An approach to measuring quality

Introduction

Classically we divide the parameters of a project under three headings: time, cost and quality.  Now, none of these is entirely unproblematic when it comes to definition, but at least with time and cost we can set an agreed and understood target and know whether or not it has been met.  For quality we cannot do this, and yet it is, if anything, the most important of the three headings.

Because of this, many efforts have been made to establish ways of measuring quality, both in terms of specifying what is required, and of testing how well what is delivered matches the requirement.  These efforts have, by and large, been unsuccessful.  Therefore, in this paper I propose a new approach to measuring quality based not on enumerating a customer’s requirements, but rather on evaluating, and attempting to mitigate, the risks to the customer’s business.

I start with a brief analysis of the problem: defining quality and what it is about it that makes it so hard to measure.  This acts as motivation for the introduction of the new, risk-based approach.

Quality

What is quality?  It is the answer to two linked questions:

  • How to specify the desired thing?
  • How to measure how well the delivered thing matches the desired thing?

Now, it may be argued that quality is strictly only relevant to the second of these questions.  But if we are to measure the quality of a delivered thing, we must have a standard against which to measure it, and we can measure only those attributes of the delivered thing that are defined in the standard.  Once we have a standard, it is a matter of relatively simple statistics to score each measured attribute of the delivered thing and then combine the scores to formulate a total measure of goodness of fit of the delivered thing to the standard.
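
To make this concrete, here is a minimal sketch, in Python, of what such a scoring scheme might look like.  The attributes, targets, weights and combining rule are all illustrative assumptions rather than part of any standard; the point is only that, once a standard exists, the arithmetic is straightforward.

  # Hypothetical sketch: scoring a delivered thing against a standard.
  # Attributes, targets and weights are illustrative only; any simple
  # combining rule (here a weighted average) would serve.

  standard = {
      # attribute: (target, weight, whether higher or lower is better)
      "transactions_per_hour": (1000, 0.5, "higher"),
      "mean_response_seconds": (2.0, 0.3, "lower"),
      "defects_per_release": (5, 0.2, "lower"),
  }

  delivered = {
      "transactions_per_hour": 850,
      "mean_response_seconds": 2.5,
      "defects_per_release": 4,
  }

  def attribute_score(target, actual, better):
      """Score in [0, 1]; 1.0 means the target is met or exceeded."""
      if better == "higher":
          return min(actual / target, 1.0)
      return 1.0 if actual == 0 else min(target / actual, 1.0)

  def goodness_of_fit(standard, delivered):
      """Weighted combination of the per-attribute scores."""
      return sum(
          weight * attribute_score(target, delivered[name], better)
          for name, (target, weight, better) in standard.items()
      )

  print(f"Goodness of fit: {goodness_of_fit(standard, delivered):.2f}")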

So, given the standard we can measure quality.  Where does the standard come from?  By its very nature it consists of a collection of defined, measurable attributes that the delivered thing should have, with, for each of them, a target value that the delivered thing should achieve.  But this is nothing more than a specification of the desired thing: we desire that the thing have these attributes, and that their values should be the given targets.

Therefore, if we are to measure the quality of a delivered thing, we must have a well-defined specification of the desired thing.  This may seem obvious, but it is a crucial notion that is often forgotten.  Quality cannot (as some appear to believe) exist in a vacuum; it goes together with a requirement, and the form that the measure of quality can take is constrained by the way the requirement is specified.

Problems with measuring quality

Why is quality so hard to measure?

Having established the key relationship between quality and requirements, the answer to the question of why quality is so elusive as a metric is virtually trivial: quality is hard to measure because far too often requirements are expressed badly.  Let us consider some reasons for this:

  • Functional requirements are easy to specify; non-functional requirements are hard.  This is a classic problem: it is very easy to specify (say) the protocol for a transaction, but very hard to define exactly what is meant by (say) ease of use.  This leads to ‘I’ll know it when I see it’ type requirements, at which point any rigorous approach to quality goes out of the window, which is good neither for deliverer nor customer.
  • If the desired thing is of any complexity, then the requirements (even functional ones) are not independent, and it can turn out that some requirements are actually mutually exclusive, or at the least inhibit one another.  If this happens, a non-functional decision is needed as to the desired balance between the conflicting requirements, which, by introducing a non-functional element into an apparently functional requirement, makes objective measurement of its quality extremely difficult.
  • Even functional requirements carry with them a penumbra of non-functional requirements, because things are complex and do not exist in isolation.  To deliver a particular kind of transaction one may need a widget of type X, which can only handle so many transactions per hour, and so suddenly, in trying to meet one functional requirement, we are introducing (or worse, violating) a non-functional requirement.

Therefore, a direct approach to measuring quality, carried out by means of direct measurements of attributes of the delivered thing against attributes of the desired thing, will be subjective, and therefore useful only as something to wrangle about.  We need another approach.

Over-precise specifications

The primary problem with any specification of the desired thing is that, unless one is extremely careful, it turns into a specification of the required solution.  That is to say, as soon as I say that the delivered thing should have the following qualities, I am limiting the possibilities for what I will get.

Now this is, to an extent, what I want.  I do not, after all, want the delivered thing to bear no relation to my needs.  But that is the point.  I have a need, and I can specify that need.  However, specifying a need, that is saying what is wrong, is very different from specifying a desired thing, that is saying what will satisfy the need.  If I go to a supplier and say ‘I have this problem, please solve it’ then I leave them free to come up with the best solution they can find.  If I say ‘I need a widget’ then they will build a widget, even if that is not what I actually need.

For example, the obvious solution to my need may be to deliver a particular kind of widget, but analysis of the problem domain might show that in fact (say) a change in business processes could solve the problem.  But if I asked my supplier to deliver a widget then they will have failed in their task if they do not deliver a widget, even if business process change could have been more effective, and so they will naturally deliver a widget.  Thus, by specifying the desired thing, one falls into the trap of dictating to the supplier, and hence constraining their ability to think creatively.

To put this more formally: unless requirements are specified in terms of problems to be solved, they end up as solution specifications, not requirement specifications.  Existing techniques – requirement specifications, use cases and so on – are almost exclusively solution specifications.

The risk-based approach

Basic approach

So the outcome of this discussion is that what is required is an approach which uses strictly measurable qualities to determine the goodness of fit of the deliverable to the need, and which specifies the need purely in terms of the problem, and not in terms of the desired solution.

There is a very simple approach to this.  Any problem that a business might have can be expressed in terms of risk to the business’ operation.  If there were no risk we would not have a reason to change anything or to require some delivered thing.  Therefore we:

  • Specify the need in terms of the risks to the business that we want the deliverable to mitigate
  • Measure the quality of the deliverable in terms of how effectively it mitigates those risks, that is to say, how much of the original risk remains outstanding once delivery is effected (a sketch of how this might look follows this list)
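
By way of illustration, here is a minimal sketch, in Python, of what such a risk-based specification and quality measure might look like.  The risks, their scores and the simple fraction-of-risk-mitigated formula are illustrative assumptions only; any agreed quantification scheme could be substituted.

  # Hypothetical sketch: the need expressed as risks, and quality measured
  # as the proportion of the original risk mitigated by the delivery.
  # Risk descriptions and scores are illustrative, not from a real project.

  risks_before = {
      "order processing fails at month-end peak": 0.8,
      "customer data exposed to unauthorised staff": 0.6,
      "regulatory report delivered late": 0.4,
  }

  # The same risks, re-assessed with the same formula after delivery.
  risks_after = {
      "order processing fails at month-end peak": 0.2,
      "customer data exposed to unauthorised staff": 0.1,
      "regulatory report delivered late": 0.4,  # not addressed
  }

  def quality(before, after):
      """Fraction of the original risk mitigated: 1.0 is complete mitigation."""
      total_before = sum(before.values())
      total_after = sum(after[name] for name in before)
      return 1.0 - total_after / total_before

  print(f"Quality of delivery: {quality(risks_before, risks_after):.0%}")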

It is clear that this does not fall into the over-specification trap, as the specification consists only of risk definitions; risk mitigation (determining the deliverable) is left up to the supplier.  Thus the delivery should be seen not as the fabrication of a thing, but as a process of managing and mitigating the selected risk.  This means, in particular, that this approach is suitable for ongoing deliveries, where the deliverable is a continuing service rather than a one-off change or object.

This is already a huge step forward as compared to existing approaches, so we might be tempted to stop at this point.  However, it is not clear that this approach gives us objective measurements of quality that can be used to avoid wrangling.

Objective measures of risk

There are many approaches to quantifying risk, and we will not choose any one.  Rather we will specify key criteria that the measure must satisfy:

  • It must be generic, so it is not tailored to the specific risks under consideration, but is applicable to any risk.  If we tailor risk-measurement to the situation at hand we are, in effect, specifying the solution, only now we are doing it by quantifying risk so precisely that only a limited class of solutions can mitigate it.
  • It must apply uniformly to any risk mitigation approach.  If the measurement technique favours certain approaches, once again we are specifying the solution, not specifying the problem.
  • It must be simple and deterministic, so there is no argument as to the initial quantity of risk or the residual risk after delivery.  Simplicity is also a good way of guaranteeing genericity, because a simple approach cannot be over-engineered to fit a specific problem.
  • It must be used uniformly.  If it does not apply to all parts of the delivery at all times, inevitably biases will creep in that will result in solution specification by favouring particular areas of concern, or aspects of the solution.

Therefore, at the start of any project the customer and deliverer should agree on a standard, generic risk quantification formula (one of the standards based on quantifying probability and impact in terms of cost, time and quality as high, medium or low should be more than adequate) and use it consistently for the entire length of the project and any subsequent after-care.
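
As an illustration of the kind of formula intended, here is a minimal sketch, in Python, of a probability-times-impact scheme with high/medium/low ratings.  The numeric weights, and the rule of taking the worst of the cost, time and quality impacts, are assumptions the customer and deliverer might agree at the outset, not a prescribed standard.

  # Hypothetical sketch of a generic risk-quantification formula:
  # probability and impact each rated low/medium/high, and the risk
  # score taken as their product.  The numeric weights are simply a
  # convention to be agreed at the start of the project.

  LEVELS = {"low": 1, "medium": 2, "high": 3}

  def risk_score(probability, impact):
      """Score from 1 (low probability, low impact) to 9 (high, high)."""
      return LEVELS[probability] * LEVELS[impact]

  def overall_risk(probability, cost_impact, time_impact, quality_impact):
      """Assess impact separately for cost, time and quality; take the worst."""
      worst = max(cost_impact, time_impact, quality_impact, key=LEVELS.get)
      return risk_score(probability, worst)

  print(overall_risk("medium", "low", "high", "medium"))  # prints 6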

 
