Identifying Reliable, Objective Software Development Metrics

In a broad sense, there are several aspects of software development that remain loosely defined. The flexible definition of its components is, in part, because software development is a constantly evolving skill set. How developers work independently and in teams is growing as rapidly as the development languages and frameworks they are coding with.

Tracking developer performance and optimising workflows on development teams is no new challenge. Thanks to more access to reliable performance data, lead developers and management are gaining more insight before making critical decisions in their hiring and production processes.

The crucial question isn’t whether we must look to data when growing our teams and developing our workflows. Hiring managers already know that data-based decision-making is essential. Instead, the industry focus is on the very definition of what software development metrics are and which of those metrics management needs to rely upon when shaping their teams and workflows.

SeaLights, a pioneer company in the space of Software Quality Governance & Intelligence Technology, defined it best:

“Software development metrics are quantitative measurements of a software product or project, which can help management understand software performance, quality, or the productivity and efficiency of software teams.”

With a proper definition of software development metrics, the importance of collecting, observing, and putting those metrics to use is easy to see. 

Long before code collaboration tools became a vital asset for identifying productivity gaps in software development workflows, there were text-based ways of tracking software performance and quality (logs). These practices have since benefitted greatly from the modern approach to collecting software performance metrics.

However, the evolution of how those metrics are collected and managed is reshaping how teams and software are built.

To better understand the aspect of software development metrics that we are referencing, it is best to identify some standard methods currently applied in this field.

1) Counting Lines Of Code

Counting lines of code, also commonly referred to as LOC or SLOC (Source Lines of Code), is one of the more old-fashioned ways of tracking developer productivity. When loosely applied, it is helpful if the KPI is to increase productivity. However, when the goals set for developers are reliant upon counting lines of code, it is easy to lose track of quality.

“Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.” -Bill Gates

Beyond being purely a productivity metric, one that gives little weight to quality over quantity, the LOC metric has another critical flaw: modern applications should achieve their functionality with as little code as possible, so that the software runs as efficiently as possible. Although one of the oldest metrics, LOC is also one of the most flawed. Managers need to be wary that KPIs focused only on output can encourage similarly adverse effects on staff behaviour.
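
To make the metric concrete, here is a minimal SLOC counter in Python. It is a sketch only: it assumes single-line "#" comments and ignores block comments and other languages, which real tools handle.

```python
# Minimal SLOC counter: counts non-blank, non-comment lines.
# Assumes Python-style "#" comments only; real tools such as cloc
# handle block comments and many languages.

def count_sloc(source: str) -> int:
    """Count source lines of code, skipping blanks and comment-only lines."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = """
# a comment
x = 1

y = x + 1  # an inline comment still counts
"""
print(count_sloc(sample))  # 2
```

Note how easily the number is gamed: splitting one statement across several lines raises the count without adding any value, which is exactly the weakness described above.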

2) Function Points

A Function Point is a software development metric representing the fulfilment of one step towards producing software with a specific function required to complete the project. Often, Function Points are managed in task-tracking and productivity applications commonly used by teams, such as JIRA or Subversion.

The overall functionality of any software is central to the purpose of developing it. As such, the Function Point as a software development metric might seem like an ideal measurement of productivity. 

However, not all functions are created equally. 

Some features are cornerstones of software functionality and performance, whereas others are less crucial to the application's performance. Additionally, the number of development resources needed to complete each Function Point can vary greatly. The prioritisation of each Function Point has to consider not just how many tasks need to be performed and completed, but how they are managed.

Different programming languages require different amounts of work. To track productivity and calculate efficiency, managers need a consistent expression of effort throughout stages of work, rather than monitoring the success or failure of a workflow hinging upon a designated number of completed Function Points.
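
One way the language problem is handled in function point analysis is with "gearing factors", the average amount of code a language needs per function point. The factors below are illustrative assumptions, not published benchmarks; the sketch only shows why a raw Function Point count is not a consistent expression of effort.

```python
# Sketch: the same number of Function Points implies very different
# amounts of code depending on language. GEARING values (assumed SLOC
# per function point) are illustrative, not real benchmarks.

GEARING = {"java": 53, "python": 27, "c": 97}

def estimated_sloc(function_points: int, language: str) -> int:
    """Translate function points into an approximate code size."""
    return function_points * GEARING[language]

# Ten function points in two different languages:
print(estimated_sloc(10, "python"))  # 270
print(estimated_sloc(10, "c"))       # 970
```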

3) Story Points

Story Points are a software development metric representing a stage in the project rather than a specific date or amount of time. As such, they are often treated as a higher-level version of the aforementioned Function Points. 

Software development giant Atlassian summarises Story Points more simply as:

“…units of measure for expressing an estimate of the overall effort required to fully implement a product backlog item or any other piece of work.”

Atlassian also identifies that some of the benefits of using Story Points as a software development metric are:

  • Dates don’t account for the non-project-related work that inevitably creeps into our days: emails, meetings, and interviews that a team member may be involved in.
  • Dates have an emotional attachment to them. Relative estimation removes the emotional attachment.
  • Each team will estimate work on a slightly different scale, which means their velocity (measured in points) will naturally be different. 
  • Once you agree on the relative effort of each Story Point value, you can assign points quickly without much debate. 
  • Story points reward team members for solving problems based on difficulty, not time spent. This keeps team members focused on shipping value, not spending time. 

This approach to productivity tracking with Story Points as a metric fits with the Burndown Charts used in Agile to monitor a project's progress. 

However, the completion of each step doesn’t reveal any insight into the actual amount of work required. Consequently, managers are left with only their original estimations (of time) to work with, which may not be accurate. They are likely to become aware of gross inaccuracies in time assessment due to unforeseen challenges in the software development process.
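
In practice, Story Points feed into a team's velocity, the average points completed per sprint. A minimal sketch, with made-up sprint figures; as noted above, velocity is relative to each team's own scale and should not be compared across teams.

```python
# Sketch: velocity as the average Story Points a team completes per
# sprint, and a rough forecast derived from it. Data is illustrative.

def velocity(points_per_sprint: list[int]) -> float:
    """Average completed Story Points over recent sprints."""
    return sum(points_per_sprint) / len(points_per_sprint)

completed = [21, 18, 24]            # last three sprints (made-up data)
print(velocity(completed))          # 21.0

# Rough forecast: sprints needed to clear a 105-point backlog
print(105 / velocity(completed))    # 5.0
```

The forecast inherits every inaccuracy in the original estimates, which is precisely the limitation described above.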

4) Sprint Burndown Charts

A Sprint Burndown Chart is a software development metric that compares the remaining workload against the amount of time left to finish the entire project. 

As a software development metric, burndown charts help managers efficiently (and quickly) know the progress of a current stage of software development. By seeing the remaining Function Points at a glance, managers have the metrics necessary to determine when the project can reach completion.

On the whole, burndown charts are easy to grasp by both software development teams and their management. When developers can easily see the progress of their work, they are more likely to be incentivised to reach the finish line. In an industry that is heavily reliant upon complex code, burndown charts are excellent because they are simple to understand at a glance and get straight to the point. 
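
The data behind such a chart is equally simple. A minimal sketch, with made-up numbers, comparing an ideal burn-down line against the points actually still open each day:

```python
# Sketch of the data behind a sprint burndown chart: ideal vs actual
# remaining Story Points per day. All figures are illustrative.

SPRINT_DAYS = 5
TOTAL_POINTS = 20

# Ideal line: work burns down evenly to zero by the last day.
ideal = [TOTAL_POINTS - TOTAL_POINTS * d / SPRINT_DAYS for d in range(SPRINT_DAYS + 1)]
actual = [20, 20, 14, 12, 6, 0]  # points still open at the end of each day

for day, (i, a) in enumerate(zip(ideal, actual)):
    trend = "behind" if a > i else "on track"
    print(f"day {day}: ideal {i:4.1f}, actual {a:2d} ({trend})")
```

A glance at the output tells a manager whether the sprint is on track, which is exactly the at-a-glance quality praised above.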

However, they are not without their disadvantages.

“The disadvantage of a burndown chart is that it often distracts teams from understanding what is going on under the surface as they focus on improving the chart itself. It leads to a wrongly-prioritized backlog, unclear requirements, unrealistic expectations, and deadlines.”  -Thierry Tremblay, CEO & Founder at Kohezion

5) Code Churn

Code churn is the software development metric referring to code rewritten or deleted shortly after being written the first time. 

It is common for software developers and engineers to write code, test it, watch it break and surface bugs, and then rewrite their code to better solve the issue they are working on. 

Like every other metric before it, this is an essential tool for hiring managers to consider when making decisions. While developers are expected to churn their code, seeing a high churn rate from specific developers on a team might mean that they are better suited for other projects than the one(s) they are working on.

“Tracking code churn is a way to think about the percentage of code that sticks around to deliver business value.” (Source: PluralSight)
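
As a simple illustration, churn is often expressed as the share of recently written code that was rewritten or deleted soon afterwards. In practice the line counts would come from version-control history (for example, git diffs); in this sketch they are given directly.

```python
# Sketch: churn rate as the fraction of recently written lines that
# were rewritten or deleted shortly afterwards. Counts are illustrative;
# a real implementation would derive them from version-control history.

def churn_rate(lines_written: int, lines_churned: int) -> float:
    """Fraction of new code rewritten or deleted soon after being written."""
    return lines_churned / lines_written

# 300 lines written over a period, 90 of them reworked within weeks:
print(f"{churn_rate(300, 90):.0%}")  # 30%
```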

6) Commit Frequency Or Number Of Commits

As with the software development metrics mentioned above, Commit Frequency is a standard but flawed software development metric.

Commit Frequency refers to the number and frequency of code commits that developers make while working on a project. While it makes sense to have benchmarks set for how often developers are committing actual work to the codebase, setting the Commit Frequency as a KPI rather than just observing it as a software development metric is risky. 

As with other volume-based productivity metrics, managers risk sacrificing quality for quantity when setting targets for this metric, which is also commonly referred to as commit count. A high number of commits does not indicate overall productivity or quality of the delivered work. The same logic can be applied to the number of bugs fixed, another standard and grossly flawed software development metric.
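
A small sketch with made-up data shows why raw commit counts mislead: one developer makes many tiny commits, another lands a single substantial change.

```python
# Sketch: tallying commits per author versus lines changed per author,
# from made-up (author, lines_changed) records. Illustrates why commit
# counts alone are a poor proxy for contribution.

from collections import Counter

commits = [
    ("alice", 5), ("alice", 3), ("alice", 2), ("alice", 4),  # many tiny commits
    ("bob", 240),                                            # one substantial commit
]

counts = Counter(author for author, _ in commits)
lines = Counter()
for author, changed in commits:
    lines[author] += changed

print(counts["alice"], counts["bob"])  # 4 1
print(lines["alice"], lines["bob"])    # 14 240
```

By commit count alone, alice looks four times as productive; by volume of change, the picture inverts, and neither number says anything about quality.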

An Innovative Software Development Metric: Coding Effort

Given the unreliability of the software development metrics currently in use, organisations are regularly left with few tested metrics to keep their projects on track and their teams working as efficiently as possible.

Finding a suitable measure is the foundation of the research that resulted in BlueOptima's proven solution to measure the effort invested in (and thus required by) any software project, story point, function point, etc.

BlueOptima calls this solution the Coding Effort. Coding Effort is a software development metric expressed in the hours of work software developers contribute to code repositories. This metric offers a basis for evaluating the efficacy of developers and teams within an organisation, various outsourcers – and individuals and groups within the outsourcers you use. 

Thus, organisations can compare various options fairly in terms of cost-efficiency. When managers perceive project progress to be faster or slower than they had planned, this may be due to forecasting more or less time required for the project. Using Coding Effort as a meaningful basis for assessing value for money removes the costly factor of inaccurate time estimation.

With experience, organisations can gain intelligence about the Coding Effort required for different Function/Story Points or projects. Factoring in the time per unit of effort, and the cost per unit of time, that internal or external developers can supply affords well-informed estimations of time and cost for each option.
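
The comparison described above can be sketched in a few lines. The effort figure and hourly rates below are illustrative assumptions, not BlueOptima's actual model; the point is only that a consistent effort estimate makes the cost of each supplier directly comparable.

```python
# Sketch: comparing supplier cost for the same work item, given an
# estimated Coding Effort in hours. Effort and rates are made up.

def estimated_cost(effort_hours: float, hourly_rate: float) -> float:
    """Cost of delivering a given amount of Coding Effort at a given rate."""
    return effort_hours * hourly_rate

effort = 120.0  # hours of Coding Effort estimated for the work item

in_house = estimated_cost(effort, 75.0)    # assumed internal team rate
outsourced = estimated_cost(effort, 55.0)  # assumed external supplier rate

print(in_house, outsourced)  # 9000.0 6600.0
```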

Once a project is underway, BlueOptima provides insight into the rate at which developers exert Coding Effort, so that productivity slowdowns can be resolved and future progress monitored.
