Traps to avoid for your project: Deploying too many abstract indicators


Good KPIs are easy to understand

This is probably the most frequent pitfall we have observed over the last few decades. As in any other management project, indicators are strategic and will reflect your quality approach by making it concrete. Indicators are the main expected deliverable, and this is particularly true for a governance program for your software developments.

In this context, the “trap” lies in the temptation to express the quality of a software artifact with a magical synthetic indicator, built from numerous heterogeneous and conflicting data sources. In short, badly designed indicators are like hieroglyphics in your dashboards: no one can understand them, not even their own designer!

And when we fall into this trap, we usually get an unbounded value with no unit of measurement. For example: my overall application quality adds up to 128, with 72 for maintainability, 42 for robustness, and so on. Doesn’t that ring a bell? As you will easily understand, these values do not mean much when taken out of context. Moreover, they raise embarrassing issues that will quickly highlight the poor design of your indicators and weaken your quality management program at an early stage:

  • Developers and managers struggle to fully understand the indicators and to communicate with each other

  • If an indicator is misunderstood, it won’t be adopted; in some cases, it will be rejected outright

  • The tool that computes these indicators will be challenged and replaced by “homemade” indicators: this double reading will jeopardize your decision making and the resulting action plans

What is clearly thought out is clearly expressed!

The design of your quality indicators is a key step that needs time and attention. It must be clear, strong and indisputable, in order to allow your quality program to be understood, applied, adopted and continuously improved. Remember its essential role before building it: an indicator measures the gap between a situation and a target to be achieved, and helps you implement actions to reach your quality goal.
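As a minimal illustration of this definition (the function name and the 80% target below are assumptions chosen for the example, not part of any standard), an indicator boils down to a measured value, a target, and the gap between them, expressed in an explicit unit:

```python
# Minimal sketch: an indicator as the gap between a measured situation and a
# target, carrying an explicit unit (percentage points).
# The 80% coverage target is an illustrative assumption.

def coverage_gap(measured_pct: float, target_pct: float = 80.0) -> float:
    """Return how many percentage points are still missing to reach the target."""
    return max(0.0, target_pct - measured_pct)

print(coverage_gap(65.0))  # 15.0 -> 15 points short of the 80% target
print(coverage_gap(92.0))  # 0.0  -> target reached
```

Because the gap is bounded and carries a unit, everyone can read it the same way, which is exactly what the synthetic “128” score above fails to provide.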

For example, one of the primary goals for every mobile application is its reliability during the operations phase:

  • Make sure the code is easily testable

  • Check that it has been effectively tested!

  • Reliability will be based on the following data:

    • Code testability metrics (cyclomatic complexity of methods, coupling between classes, etc.)
    • Unit Test information, such as code coverage from tools like JaCoCo, Emma, Clover, etc.
    • Data from function testing campaigns
    • To complete this overview of the software’s reliability, information from load testing tools can also be included. Indeed, if the app is not able to support thousands of concurrent users, unavailability due to performance and scalability issues would be considered a lack of reliability.
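The data sources above can be combined into a single bounded, readable score. The sketch below shows only one possible aggregation; the weights, the complexity threshold of 10, and the function names are illustrative assumptions, not a prescribed formula:

```python
# Hypothetical sketch: folding heterogeneous reliability inputs into one
# bounded, documented 0-100 indicator. Weights and thresholds are assumptions.

def reliability_indicator(avg_complexity: float,
                          line_coverage_pct: float,
                          functional_pass_pct: float) -> float:
    """Return a 0-100 score; 100 means the reliability target is met."""
    # Testability: full credit at or below an (assumed) complexity threshold of 10
    testability = 1.0 if avg_complexity <= 10 else 10.0 / avg_complexity
    # Normalize percentages to the 0..1 range
    coverage = min(line_coverage_pct, 100.0) / 100.0
    functional = min(functional_pass_pct, 100.0) / 100.0
    # Explicit, documented weights keep the indicator debatable but never opaque
    return round(100.0 * (0.3 * testability + 0.4 * coverage + 0.3 * functional), 1)

print(reliability_indicator(8.0, 75.0, 90.0))  # 87.0
```

The point is not the particular weights, but that every input, threshold and unit is written down, so the indicator stays explainable in a few words.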

The characteristics of an efficient quality indicator

Without diving into the specifications detailed in NF X50-171 or ISO 9001:2008, we could say that an indicator is efficient when it meets the following requirements:

  • Relevance and usefulness: How well does the indicator reveal the gap between the situation and the quality target?

  • Simplicity: a good indicator should be explainable in a few words. Say what it does, do what it says, then check. It must also indicate whether the target has been reached (see the “Goal Question Metric” approach by Victor Basili).

  • Representativeness: make sure the indicator is:

    • Comprehensive: the indicators must be available for all artifact levels (portfolio, app, package, etc.). This point is made easier to cover with static code analysis tools that automate data collection.
    • Quantifiable: for example, code coverage metrics, total bugs found in execution, rule violations found in code, sum of new code lines delivered, average of days of delay for a delivery, etc.
    • Objective: the components of your indicators should not be open to debate. Consider the number of lines of code used to measure the size of a software artifact: should blank lines be counted? Commented lines? Only compiled instructions? Should generated code count as part of the software?
  • Deployability: it’s one thing to sketch your indicators on paper, but quite another to deploy them and make them “live” with real data. Often, the data is not in the right format, is scattered across different tools, or simply does not exist for some artifacts. You must verify your indicators’ availability and scalability.
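To make the line-of-code question concrete, the sketch below counts lines under an explicit, written-down policy. It deliberately simplifies (it recognizes only Python-style `#` comments, an assumption for the example), but it shows that “objectivity” comes from stating the convention, not from the metric itself:

```python
# Illustrative sketch: a line-of-code count is only "objective" once its
# counting policy is explicit. Each policy choice is a visible parameter.

def count_loc(source: str, count_blank: bool = False,
              count_comments: bool = False) -> int:
    """Count lines of code under an explicit policy (Python-style # comments only)."""
    total = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:                      # blank line
            total += 1 if count_blank else 0
        elif stripped.startswith("#"):        # comment-only line
            total += 1 if count_comments else 0
        else:                                 # actual instruction
            total += 1
    return total

sample = "x = 1\n\n# a comment\ny = x + 1\n"
print(count_loc(sample))                                         # 2 (instructions only)
print(count_loc(sample, count_blank=True, count_comments=True))  # 4 (every line)
```

Two teams using different flags will report different sizes for the same file; agreeing on the flags up front is what keeps the indicator uncontroversial.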

Technical Debt indicators: relevant, ready to use and easy to deploy

Technical Debt indicators are particularly appropriate for starting your Software Quality Management project. Coined by Ward Cunningham in the 1990s, technical debt refers to those tasks (features, bug fixes, code refactoring, architecture optimization, etc.) that a development team, whether willingly or not, puts off to a subsequent sprint or product release.

It immediately provides the team with a breath of fresh air, but as with any other loan, the later you repay your debt, the more interest you will pay. In other words, if a feature costs 100 to develop today, it will cost 100 plus something tomorrow, 100 plus something higher the week after, and so on.
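The “interest” metaphor can be made concrete with a toy compounding model. The 5% weekly rate below is an arbitrary assumption, chosen only to show the shape of the curve, not an empirical figure:

```python
# Toy illustration of interest on technical debt: a fix costing 100 today
# grows if postponed. The 5% weekly rate is an arbitrary assumption.

def deferred_cost(initial_cost: float, weekly_rate: float, weeks: int) -> float:
    """Cost of a deferred task after compounding 'interest' for some weeks."""
    return round(initial_cost * (1 + weekly_rate) ** weeks, 2)

print(deferred_cost(100, 0.05, 0))  # 100.0  -> pay now
print(deferred_cost(100, 0.05, 4))  # 121.55 -> one month of deferral
```

Whatever the real rate is for a given codebase, the compounding shape is the argument: deferral is never free.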

Many IT companies (such as Squoring or Inspearit with the SQALE method) have implemented this concept to provide development teams with tools to identify and quantify the amount of their applications’ technical debt, generally expressed in days. Thanks to these measurements, these kinds of indicators become powerful tools to steer the quality of software development projects.

  • Usefulness: Technical Debt weighs heavily on teams’ agility and their capacity to innovate. A high technical debt forces significant effort into corrective maintenance, to the detriment of delivering the added value of new features.

  • Ease of Understanding: Technical Debt speaks for itself when expressed in a unit of work (time, money, etc.), from the CIO to the developers through DevOps. The workload of each postponed task, or of one delivered with a lower level of quality than expected, is summed up to measure the total debt of an application, a project or an artifact.

  • Ease of Deployment: At its simplest, technical debt is computed by accumulating workloads of nonconformities. Thus, your first attempts to measure project debt can be carried out with a simple Excel sheet.
  • Reconciliation of technical and managerial visions: by providing stakeholders with relevant, understandable and unambiguous information, you give them a common framework to act on real software quality issues, rather than debating the definition of the indicator itself.

  • Task-oriented by nature: this is a huge advantage of Technical Debt indicators. Computed from nonconformities found in the project (code, documentation, requirements, etc.), they allow you to list the measures needed to reduce your technical debt with great ease, in an efficient and objective way.
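At its simplest, the accumulation of nonconformity workloads described above fits in a few lines of code. The field names, rules and remediation days below are illustrative assumptions standing in for the “Excel sheet”:

```python
# Minimal sketch of the "Excel sheet" approach: technical debt as the sum of
# remediation workloads (in days) attached to nonconformities.
# The schema and figures are illustrative assumptions.
from collections import defaultdict

nonconformities = [
    {"artifact": "billing", "rule": "missing unit test", "days": 0.5},
    {"artifact": "billing", "rule": "cyclomatic complexity > 20", "days": 1.0},
    {"artifact": "auth", "rule": "undocumented public API", "days": 0.25},
]

# Roll debt up per artifact, then across the whole project
debt_by_artifact = defaultdict(float)
for nc in nonconformities:
    debt_by_artifact[nc["artifact"]] += nc["days"]

total_debt = sum(debt_by_artifact.values())
print(dict(debt_by_artifact))  # debt per artifact, in days
print(total_debt)              # 1.75 days for the whole project
```

Because each debt entry is itself a concrete task with a workload, the same table that measures the debt doubles as the remediation backlog.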

Give yourself a chance to manage your IT projects efficiently