Taking the number of lines of code produced as a measure of productivity rests on several problematic assumptions, chief among them that the number of lines a developer writes is proportional to progress through the project. This implies that each line of code requires the same amount of work to produce, and that each line contributes equally towards the functions the software is required to perform. Like any KPI, the outcome managers choose to measure will change their staff’s behaviour: developers will produce as many lines of code as they can. In reality, the opposite is often more desirable; functionality should be achieved with the least code possible, so that the software runs as efficiently as possible. Lines of Code is therefore a highly flawed metric to use as a direct judgement of productivity.1 In this way, KPIs can affect staff performance counterproductively.
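A minimal sketch illustrates the point. The two functions below (invented for illustration) fulfil exactly the same requirement, yet one "scores" six times as many lines of code as the other; a LOC metric would reward the verbose version:

```python
# Two implementations of the same requirement: sum the even numbers in a list.
# Both deliver identical functionality, but their line counts differ widely.

def sum_evens_verbose(numbers):
    # Six lines of body: a LOC metric scores this implementation highly.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total = total + n
    return total

def sum_evens_concise(numbers):
    # One line of body: identical behaviour, a fraction of the "productivity".
    return sum(n for n in numbers if n % 2 == 0)

print(sum_evens_verbose([1, 2, 4, 5, 6]))  # 12
print(sum_evens_concise([1, 2, 4, 5, 6]))  # 12
```

The concise version is the one you would want in production, yet it is the one LOC counting penalises.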
Given that an increase in Lines of Code is not necessarily proportional to an increase in functionality, should managers instead monitor the amount of user functionality that developers and teams are delivering? Each Function Point represents the fulfilment of one step towards producing software with the required functionality. These steps are already recorded in the task-tracking tool the development team uses, such as JIRA or Bugzilla, so developers are not required to spend time on any further administrative input.
Functionality is central to the purpose of creating software, so Function Point counting might seem the ideal measure of productivity. However, each function is different: some are straightforward to bring to life, while others present tougher challenges for developers to solve. Further, developers should be incentivised to deliver a Function Point without causing issues that create further work for the team, and they shouldn't be encouraged to "cherry-pick" easier tasks. Neither behaviour is in line with good teamwork, especially under the Agile methodology. So one Function Point, or any particular number of them, cannot be fairly compared with another; different programming languages also require different amounts of work per function.2 To track productivity and calculate efficiency, managers need a consistent measure of effort throughout the stages of the SDLC.
Story Points represent a stage in a software development project and can be regarded as a higher-level version of Function Points. Story Points vary in size, and Agile teams usually ascribe a size value to each.3 The approach fits with the Burn-Down Charts used in Agile to monitor progress through a project's steps. But completion of each step reveals no insight into the actual amount of work it required: managers still have only their original estimations to work with, which may not be accurate. They may become aware of gross inaccuracies in estimation due to unforeseen challenges, but won't be aware of, say, a Story Point marked in the plan as ‘5’ that should have been a ‘3’, but was executed slowly.
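A Burn-Down Chart is simply the committed Story Points minus the points closed so far, tracked day by day. The sketch below uses invented sprint figures to show the calculation, and also why it tells a manager nothing about whether each point's estimate was right:

```python
# A minimal burn-down calculation over a hypothetical ten-day sprint.
# All figures are invented for illustration.

sprint_total = 40                                    # Story Points committed
closed_per_day = [0, 5, 3, 0, 8, 5, 2, 5, 4, 6]      # points closed each day

remaining = sprint_total
burn_down = []
for closed in closed_per_day:
    remaining -= closed
    burn_down.append(remaining)

print(burn_down)  # [40, 35, 32, 32, 24, 19, 17, 12, 8, 2]

# Note: a '5' that should have been a '3' still burns down as 5 points;
# the chart records the estimate, not the effort actually expended.
```

The chart shows whether estimated points are being retired on schedule; it cannot show whether the estimates themselves were sound.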
N.B. Use-Case Points (UCPs) represent a higher level in a software project plan: the first step in breaking down the high-level software requirement, or Use-Case. UCP counts therefore provide a less detailed picture of the effort invested in a project than Function Points. McKinsey evangelise UCPs as a way of tracking progress4, as the methodology requires no administration beyond the usual activities of team management.
Given the unreliability of the metrics currently in use, as outlined above, how can organisations be sure they select the best way to deliver software projects, and that those projects are on track? Finding a suitable means is the foundation of the research, started in 2001, that resulted in BlueOptima’s proven solution for measuring the effort invested in (and thus required by) any software project, Story Point, Function Point or individual task. BlueOptima's analytics engine provides a measure of Coding Effort, expressed in hours, of the work that software developers contribute to source code repositories. This metric offers a basis on which to evaluate the efficiency of developers and teams, both within your organisation and at any outsourcers it uses.

By combining cost per hour with Coding Effort delivered, the cost-efficiency of various delivery options can be compared fairly. When managers perceive project progress to be faster or slower than planned, the cause may simply be an over- or under-estimate of the time the project required; using Coding Effort as the basis for assessing value for money removes this factor of inaccurate estimation. With experience, organisations can build intelligence on the Coding Effort required for various Function Points, Story Points or whole projects, and thus better estimate the size of their requirements and the expected cost of each means of delivery. Once a project is under way, BlueOptima provides insight into the rate at which developers produce meaningful Coding Effort, so that drags on productivity can be resolved and progress monitored.
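The cost-efficiency comparison described above amounts to dividing total spend by hours of Coding Effort actually delivered. The sketch below uses entirely invented figures for two hypothetical delivery options; it is not BlueOptima's implementation, only the arithmetic the comparison rests on:

```python
# Comparing two hypothetical delivery options on cost per hour of
# Coding Effort delivered. All figures are invented for illustration.

teams = {
    "in-house":   {"cost_per_hour": 60.0, "billed_hours": 1000, "coding_effort_hours": 700},
    "outsourcer": {"cost_per_hour": 35.0, "billed_hours": 1000, "coding_effort_hours": 350},
}

for name, t in teams.items():
    total_cost = t["cost_per_hour"] * t["billed_hours"]
    cost_per_effort_hour = total_cost / t["coding_effort_hours"]
    print(f"{name}: {cost_per_effort_hour:.2f} per hour of Coding Effort")

# in-house:   85.71 per hour of Coding Effort
# outsourcer: 100.00 per hour of Coding Effort
```

With these illustrative numbers, the option with the lower hourly rate turns out to be the more expensive per hour of Coding Effort delivered, which is exactly the distinction a raw rate-card comparison would miss.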
For a more thorough analysis of metrics that support cost-effective software development, please complete the form below to access the white paper: Four Critical KPIs for Outsourced Software Development. (N.B. We will not email you frequently or pass on your details to any third parties.)
1. Metrics are rigorously analysed in the white paper: Four Critical KPIs for Outsourced Software Development. Please complete the form above to receive this.
2. Ibid. p. 6
3. Typically a simplification of the Fibonacci sequence: 1, 2, 3, 5, 8, 13, 20, 40, 100; What is a story point? – AGILEFAQ
4. Enhancing the efficiency and effectiveness of application development – McKinsey, August 2013