Approaches to the service catalogue

Introduction

One of the major problems facing IT providers is how to be both quicker and cheaper.  Quicker in the sense that we need to be able to respond to new requirements from our customers in a timely way.  Cheaper in the sense that we urgently need to reduce the whole-life support costs of our systems (Gartner estimate that 80% of the total cost of ownership of any IT system comes from life-time support: the initial build, the only aspect of cost that we traditionally concern ourselves with, thus accounts for a mere 20%).

It seems that we can actually solve both of these problems at once, the magic bullet in this case being the concept of the service catalogue.  The argument goes that if we have a service catalogue of standard offerings to our customers, then we are quicker because most of the time we can meet requirements with something off the catalogue, which we know how to do, so (assuming, always, that we have adopted those other magic bullets of capacity management and capacity planning) meeting them should be just a matter of cranking the handle.  Similarly, we are cheaper because if we only support systems based on the catalogue, then we reduce the diversity of the install base, and so make it cheaper to support.

This argument is all very well as far as it goes, but unfortunately, as I will show below, it doesn’t go far enough, and in fact, if the service catalogue is implemented in the rather naïve way indicated above, then it will not do all we want of it; this is not to say that all those who have cried ‘the service catalogue is the answer’ are deluded, but simply that, as is the nature of magic bullets, the concept needs considerable refinement before it can be made to live up to its promise.

This note carries out the initial stage in that refinement, showing how what I will call a layered service catalogue can, by joining customer requirement management to a structured design process, go some way to making the potential benefits of reduced time to market (TTM) and total cost of ownership (TCO) realisable.  The method I will adopt is to analyse the naïve service catalogue view, to understand its benefits and disbenefits.  Then I will examine two antithetical approaches to maintaining the catalogue.  As it will turn out that the benefits and disbenefits of these two approaches are complementary, I will then perform the Hegelian two-step to arrive at a synthesis: the layered service catalogue.

Note that, for simplicity, I will discuss things from the point of view of an IT organisation offering services to its customers.  Of course, my discussion is entirely general, and so it can apply equally well to any delivery-based organisation.

The naïve service catalogue

Description

This approach is very much as outlined above, but I will put it more formally.  We have a catalogue of the services that we offer to our customers.  These are well-defined, shrink-wrapped offerings, meaning that all design, etc. work has been done and they basically consist of a blueprint from which a working copy of the service being offered can be built.

When a customer approaches us with a requirement, appropriate staff work with them to determine which offering from the catalogue is best suited to their needs.  Engineering staff can then be tasked with making an instance of that offering available for the customer (this may involve procurement, or simply reuse of existing capacity: for the purposes of my argument the question of where and how the capacity is found is unimportant).
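To make the model concrete, here is a minimal sketch of the naïve catalogue in Python.  All names (`Offering`, `NaiveCatalogue`, the `blueprint` field) are hypothetical illustrations, not part of any real system; the point is simply that provisioning is a lookup followed by stamping out a copy, and that a requirement not on the catalogue has no answer at all.

```python
from dataclasses import dataclass


@dataclass
class Offering:
    """A shrink-wrapped catalogue entry: all design work already done."""
    name: str
    blueprint: dict  # the design artefacts needed to build an instance


class NaiveCatalogue:
    def __init__(self) -> None:
        self._offerings: dict[str, Offering] = {}

    def add(self, offering: Offering) -> None:
        self._offerings[offering.name] = offering

    def provision(self, name: str, customer: str) -> dict:
        """Stamp out a working copy of an offering for a customer.

        Raises KeyError if the requirement is not on the catalogue --
        the naive model has nothing to say about new requirements.
        """
        offering = self._offerings[name]
        return {"customer": customer, "built_from": offering.blueprint}
```

Note that `provision` does no design work whatsoever: that is both the source of the TTM benefit and, as discussed below, the model's fundamental limitation.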

Benefits

TTM is indeed reduced, because all that we ever do is to take a canned solution and put it out on the machine-room floor.  All the nasty, time-consuming, risky requirement analysis, system engineering, scope creep, testing, etc that takes up so much time is relegated to … somewhere else.

Similarly, TCO is reduced, for us at least: instead of having to support a huge number of specially designed systems, each lovingly tailored to meet a specific requirement, all we have to support is serried ranks of clones of the systems on the catalogue.  This reduces the diversity of the systems to be supported, making it possible to manage the IT infrastructure not as many separate systems (as at present) but as a largely unified estate of capacity, reducing ongoing support costs.  Thus, life-cycle support cost is reduced, provided that the catalogue does not get too large.

Disbenefits

Now consider the disbenefits.  As hinted in the comment above about TTM, this approach works perfectly provided customers never ask for anything new.  Unfortunately, this is just what they do all the time; the business does not stand still, so it is entirely unrealistic to assume that we can guess in advance what our customers will want from us in the next eighteen months.

What can we do to get around this?  Not much, to be honest.  We could use what I will call the yah-boo-sucks model, and tell our customers that we don’t care if they want something new: they’re not getting it.  That would be easy to do, and probably even fun, but it would not, perhaps, be very corporate-minded of us.  More practically, there are two antithetical approaches in common use today, which I will call the grand vision of the future and the vertical stovepipe.  Though neither is suitable on its own, it turns out that a dialectical synthesis of the two results in the required magic bullet.

Thesis: grand vision of the future

Here a bunch of clever people get together and try to design the shape of the enterprise n years ahead; generally they do this by coming up with an architecture for the enterprise, which defines precisely what services everybody should offer to their customers.  By appropriate crystal-ball gazing, the clever people hope to make the architecture sufficiently general that it will last n years.

There are three major problems here.  First, as we know too well, crystal-ball gazing is not a very effective way of predicting the future, so we can be sure that the day after our shiny new architecture is approved and set in stone, some annoying customer will come up with a requirement that we didn’t think of, and that the architecture can’t meet.

Second, and following from this, if the architecture is really general enough to predict n years ahead, it will most likely be so general as to be of no use in actually trying to define the services on the catalogue.

Third, this has been tried many times, not always with significant success.

Thus, this approach does little to deal with the actual problem of unpredictable requirements, and turns out to be little more than a very fancy equivalent of the yah-boo-sucks model.  However, it does score significantly in terms of TCO: if this approach can be made to work at all, then reduced diversity in the install base can be designed in from the start.  This means that we support only a limited range of technologies implementing a limited number of pre-defined services, and so support cost, and hence TCO, is reduced.

Antithesis: the vertical stovepipe

Here, we get a new requirement that can’t be serviced from the catalogue and respond with the traditional cry of ‘we need a project!’[1]  A project (or other similar delivery vehicle) is initiated, and it carries out a complete stand-alone requirements capture – design – test – build – TTO cycle, producing a system tailored to the requirement, which is then added to the catalogue.

This is not actually too bad, in that it provides a relatively economical way to get new designs onto the catalogue.  In terms of TTM, requirements that need a new project take longer than straightforward off-the-shelf service requests, but at least the customer will know that in advance, and by virtue of having (hopefully) agreed that they need something new rather than something already on the catalogue, they will expect to wait longer to get it, and they will be able to monitor progress.  In other words, they carry out the classic schedule versus quality trade-off.

The problems start when I look at the TCO implications.  If the vertical stovepipe is adopted naively (I will show below how it can be adapted to get around this, so bear with me for now) then, as each project designs its new system from scratch, there is no mechanism in place to force it to make use of pre-existing components; the size and diversity of the catalogue therefore increases, meaning that over time we have to support more and more diverse systems, so TCO increases.

Synthesis: the layered service catalogue

An architecture helps to reduce TCO, but is fundamentally unrealistic if it forces us to lock down the catalogue for however long.  Vertical stovepipes avoid lock down, but have a disastrous effect on TCO.  Somehow the two need to be joined together.

Let’s start by thinking about the process followed by the vertical stovepipe.  Basically it does the entire design waterfall: it starts from a requirement, then it designs a system: starting from a conceptual design, then adding on more detail, then choosing technologies to use, then designing components, then integrating them.

However, why does it have to do all of this?  Why can’t it reuse work already done before to design other similar systems?  For example, say I have been asked to produce a web service that has to meet all kinds of special security requirements that none of my existing web services are able to handle.  Do I really need to start again from scratch?  Isn’t there quite a lot I can reuse from my existing designs?

Layers

Say, therefore that we have defined a set of standard points along the path from requirement to completely designed system (where these points are doesn’t matter for now; of course, if a standard process like RUP were being used then obvious points would be at the end of each of the four major phases).  Say that whenever we design a new service offering for the catalogue, the standard design process forces the designers to document their design at each of these points and then enter it into a managed layered service catalogue.

This means that the catalogue consists of a number of designs, each belonging to a specific layer, which corresponds to designs at one of the standard points, and hence to designs existing at a precisely defined level of abstraction; the higher (i.e. more abstract) the layer a design is in, the easier it is to extend it to add new capabilities.

Below the top layer, each design has a specific parent, which is the design in the layer above from which it was derived.  Designs in the top (most abstract) layer derive from a corporate architecture defined by the business, which describes what the business proposes its portfolio should look like.

The bottom (most concrete) layer of the catalogue corresponds to deployable systems, so it is equivalent to the naïve service catalogue.  It is the catalogue of things that exist today.

Using the layers

Now say I have received a request from a customer for a particular requirement that can’t be met by one of my existing bottom-layer designs.  I need to kick off a project (or whatever) to design and produce a system to meet the requirement.  What is new is that now I don’t tell the project (or whatever) design authority to go away and come up with a design.  Instead, she and I look for something in the layered catalogue that can be used as a basis for meeting the requirement; in other words, instead of starting from scratch, she uses pre-existing designs in the layered catalogue to minimise the amount of new work she has to do.  (If she has to start from scratch, right at the top layer, then she has to base it on the corporate architecture; she would, of course, have to persuade me that the benefit to the business outweighed the obvious impact on TCO of doing so.)

This carries obvious benefit: if the design authority increases reuse then we get to reduce risk, TTM (by not wasting time reinventing the wheel) and TCO, because new systems will most likely be based on partial designs for existing systems, and so will be close to them, hence reducing diversity.  As all designs will be fully documented, we will have a good understanding of what we have to support.  Also, if we take care to include business criteria as well as technical criteria when deciding how much and what to reuse, we can trade off quality against TCO.

What’s more, this approach gives me an architecture for free; what is the compendium of service designs (existing at various levels of abstraction) within the layered catalogue, but an architecture?[2]  Thus, it appears that this approach (particularly if supplemented by a structured design methodology that encourages good design practices, e.g. RUP, XP) gives another magic bullet: working strategically by thinking tactically.  If we follow it consistently, the result will be a consistent strategic architecture that evolves to meet changing business requirements.

Thus, I suggest that the following process can be adopted for using a layered service catalogue to meet a new requirement:

  • Always start at the bottom layer of the catalogue (services ready for implementation, capacity on demand).
  • When dealing with a new requirement, search in each layer, starting from the bottom, for the design best able to meet the requirement; if none can be found proceed to the next layer up.
  • At each point where the decision is made to proceed to a higher layer, a business decision must be made to trade off quality against impact on TCO of expanding the catalogue; as this impact increases the higher the layer you start from (as higher layers are more generic, and hence starting from them increases diversity in the lower layers) the more stringent this decision process should be.
  • Once a suitable design has been located and estimates of TCO impact and TTM have been received and approved, authorise a project (or whatever) to implement the requirement starting from that design.
  • Add all new designs (at each standard point passed) to the layered design catalogue in the appropriate layer.
  • As the project progresses, place a decision point at the stage at which its design process hits one of the standard points: this is a good time to re-evaluate the projected TTM and TCO impact, and for us to determine whether to authorise continuing work.
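The search-and-gate process in the bullets above can be sketched as a single function.  This is a hedged illustration, not a prescription: `meets` (the technical fit test) and `approve_layer_jump` (the business decision to accept the TCO impact of starting a layer higher) are hypothetical hooks standing in for real assessment activities.

```python
def find_reusable_design(layers, meets, approve_layer_jump):
    """Search the catalogue bottom-up for a design that can meet a requirement.

    `layers` is ordered most abstract first, most concrete (deployable) last.
    Returns (design, layer_index), or (None, None) if nothing suitable is
    found or the business declines to fund a move up a layer.
    """
    for idx in range(len(layers) - 1, -1, -1):  # bottom (concrete) layer first
        for design in layers[idx]:
            if meets(design):
                return design, idx
        # Nothing fits at this layer: a business decision gates the move up,
        # and should get more stringent the higher the layer (greater impact
        # on diversity, and hence TCO).
        if idx > 0 and not approve_layer_jump(idx - 1):
            return None, None
    return None, None
```

The key design choice mirrored here is that the trade-off is made at each layer boundary, not once up front: every step away from a deployable design is a separate, explicit business decision.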

What’s next?

I started off this note by expressing suspicion of magic bullets, and so, as I seem (mixing my metaphors horribly) to have pulled several rabbits out of hats in the course of my argument, it’s worthwhile to note places where more work is needed:

  • The ‘standard points’ in the design process need to be defined with sufficient precision that design authorities will know what they have to document and when.
  • Following on from this, a standard design process is needed; this must be fully documented, with specifications of the standard points, documentation standards, etc.  This is particularly important, as the magic bullet of ‘working strategically by thinking tactically’ is crucially dependent on this process (for example, it must strongly encourage design authorities to design with reusability in mind).
  • The process must be sufficiently rigorously defined and policed to prevent the natural tendency for design authorities to attempt to blur the distinction between the layers.[3]
  • We need to join up design processes across the business.
  • We need to work out whether ‘we need a project’ is really a suitable response to any new requirement that involves more risk than crossing the road.  A more nuanced, perhaps contract-based, approach may be preferable, particularly as customers will generally be entirely uninterested in how we deliver their requirement, provided that it does, and that they can measure its success in so doing.
  • We need an approach to risk analysis suited to thinking in terms of TCO rather than (as in classic PRINCE2) pure delivery cost.

[1] Or its younger cousin, ‘we need a managed solution!’

[2] In fact, the layered catalogue could be said to be a library of design patterns on which I have imposed some additional structure (the layering, and the parent-child relationship between designs).  The key difference is that I have placed the layered catalogue in the context of a rigorous process which uses business criteria to drive the selection of a design (pattern), and not (as is more traditional) a techie’s passing whim.

[3] For example, the process could be articulated as a series of phases, each corresponding to the progression from a parent design to a child in the next layer down; e.g. RUP.
