Showing posts with label KPIs. Show all posts

Saturday, 1 March 2014

The Big Data Revolution: What Happened to Data Quality?

There's a wonderful irony in the world of Big Data Analytics. At a time when interest in Big Data appears to be growing exponentially, some seem to be forgetting the fundamental challenges of Data Quality. A quick Google Trends analysis highlights the point. The chart below shows two trend lines extracted from Google Trends: the blue line reflects the popularity of searches for Big Data, while the red line shows the popularity of searches for Data Quality. It is important to note that the lines show relative popularity, not absolute search volumes. In fact, Google Keywords suggests that in absolute terms searches for Big Data are about 20 times as popular as searches for Data Quality.


This raises an interesting question - what's happened to Data Quality? At a time when organisations are becoming ever more interested in using their data to create performance insights and predictions, interest in Data Quality appears to be declining. Is this because Data Quality is no longer an issue?

I don't think so. On three separate occasions in the last week alone I have been involved in discussions with senior managers from some of the world's leading manufacturing and service businesses. Each time, the issue of Data Quality has come up loud and clear. These firms recognise the potential of Big Data and Analytics, but are realistic enough to know that unless they sort out their data fundamentals - unless they track the right things and make sure the raw data is accessible and of high quality - all of the Big Data Analytics in the world is not going to help them. That's why - in the Cambridge Service Alliance - one of our projects this year is focusing on creating a data diagnostic: a methodology that can be used to check whether the data you have access to is appropriate and can be better used to optimise the delivery of your services and solutions. We're in the process of testing this data diagnostic at the moment and would love to hear from you if you'd be interested in being one of the pilot test sites.

Friday, 30 August 2013

Performance Planning: Why Is It Always Left to Right?

Language and processes matter in the world of performance management. Yet far too often we take the status quo for granted. Take, for example, the phrase "performance management". Numerous organisations are seeking to improve their performance management processes, but are they really focusing on the right issues? Performance management smacks of managing past performance - taking corrective action to ensure we hit our targets. Sure, this is important, but how much effort are organisations putting into performance planning - planning future performance, rather than managing past performance?

Some would argue that shifting your focus from performance management to performance planning is a trivial change of language, but think about the behaviours performance management provokes in your organisation. Often people get very defensive when it comes to performance management. They see the aim of the game as demonstrating to their managers that they are on top of things. They have everything under control. There's nothing to worry about. Bad news can get swept under the carpet and fundamental issues can go unresolved for years.

Contrast this rather defensive behaviour with the idea of performance planning - planning for future performance. No longer is the focus on what has happened and why it has happened. Instead performance discussions focus on where we want to be and how we are going to get there. Sure, we'll still need to talk about why we are where we are, so we can understand what to do differently in the future, but the performance conversations become more constructive - no longer are they defensive reviews of past performance. Instead they focus on the future and where we want to go.

If you start down this route then some interesting issues open up. Many measurement system design methodologies (including ones we have developed) start from the classic vision-mission-objectives approach. The methodologies ask you to think about where you want to be, how you are going to get there and how you'll track your progress. These are all eminently sensible questions, but in essence they are left to right questions. Start with the vision, define the objectives, specify the targets, elaborate your initiatives and execute. An alternative (or complementary) approach is to plan right to left, or at least to check the validity of your performance plans by working right to left. Right to left planning involves looking at the detail and asking yourself: if we deliver all of these plans and initiatives, what will they add up to? Will they deliver the results we want? Right to left planning is a great way of checking the validity of your left to right plans - checking whether you'll achieve the performance you want to.
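By way of a toy illustration, a right to left sense check can be as simple as adding up the estimated contributions of your planned initiatives and comparing them to the target set left to right. All of the numbers and initiative names below are hypothetical, purely to show the shape of the check:

```python
# Right to left sense check (illustrative numbers only):
# do the individual initiatives add up to the overall target?
target_growth = 0.10  # 10% revenue growth goal, set left to right

# Estimated contribution of each planned initiative (hypothetical)
initiatives = {
    "new product launch": 0.04,
    "pricing review": 0.02,
    "service upsell": 0.02,
}

planned = sum(initiatives.values())
gap = target_growth - planned
print(f"Planned initiatives deliver {planned:.0%}; gap to target: {gap:.0%}")
```

If the initiatives only add up to 8% against a 10% target, the right to left check has done its job: the left to right plan needs revisiting before execution, not after.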

If you want to check the robustness of your approach to performance planning, just ask yourself three simple questions: (i) do we have the balance right between performance management and performance planning - or are our systems tilted either towards reviewing past performance or planning future performance; (ii) do our performance systems provoke open and constructive debate or do they drive defensive and potentially destructive behaviour - have we got the balance right between accountability and creativity in our performance systems; and (iii) how well do we validate our plans once developed - do we do the right to left sense check to establish whether all of the individual projects and activities we are going to undertake will add up to the overall plan we are setting out to achieve? If you are not confident that your performance systems are working well against any of these criteria, maybe it's time to take another look at how you approach performance planning.


Andy Neely

Friday, 11 January 2013

The Great Myths of Measurement: Satisfaction is Dead

In the late 1990s Jeffrey Gitomer wrote a book entitled - "Customer Satisfaction is Worthless, Customer Loyalty is Priceless" - a title which neatly encapsulates the second myth of measurement, "loyalty is better than satisfaction". What Gitomer and countless others seem to miss is that "loyalty" cannot and should not "supersede" satisfaction.

The way to think about this is to consider the evolution of customer measures. Years ago we used to think that complaints were a good way of tracking customer satisfaction - simply count how many times people complain and then you'll know how good your products and services are. We now know that the number of complaints is not a particularly effective measure of customer satisfaction. There are two reasons for this - first, in some organisations it is difficult even to get a complaint registered! Second, and more commonly, people don't complain directly to the organisation; they simply tell their friends about their bad customer service experiences.

So we move from customer complaints to customer satisfaction. Here firms decide to be more proactive, going out and asking their customers how they feel. Hence the plethora of surveys and phone polls asking for your opinion about service experiences. Xerox collected data in the late 1990s that showed highly satisfied customers were much more likely to repeat purchase than customers who were merely satisfied, so again the focus shifted - this time to "how do we get highly satisfied customers that will keep buying from us". The natural evolution was to customer loyalty - how do we measure the loyalty of our customers? Do they keep coming back and buying again - delivering repeat business? Do they help grow our business by recommending it to friends and colleagues (think how popular the net promoter score has become in recent years)?

The final twist comes with the introduction of customer profitability as a measure. This was prompted by work which recognised that some customers were undesirable. Bain and Co released data suggesting that 140% of a bank's profits come from 20% of its customers. The other 80% actually cost the bank money. So there was a sudden flurry of activity where people were trying to work out which customers were profitable and which were not. For the unprofitable customers the choice becomes - can we reduce the cost to serve (and make them profitable) or should we fire the customers and stop dealing with them?
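The arithmetic behind figures like Bain's is easy to see with a small sketch. Using entirely made-up profit figures for ten customers (this is not Bain's data), you can see how the most profitable minority can account for well over 100% of total profit once the loss-makers are netted off:

```python
# Hypothetical per-customer profit figures (illustrative only):
# four profitable customers, six loss-making ones.
profits = [500, 400, 300, 200, -100, -120, -130, -150, -180, -220]

total = sum(profits)  # losses drag the net total down to 500

# Profit contributed by the top 20% (the 2 best of 10 customers)
top_20_pct = sorted(profits, reverse=True)[: len(profits) // 5]
share = sum(top_20_pct) / total
print(f"Top 20% of customers deliver {share:.0%} of total profit")
```

In this toy example the top 20% of customers deliver 180% of total profit - the same shape of result as the 140% figure, and exactly why firms started asking which customers to keep, fix or fire.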

These different perspectives on measurement - complaints to satisfaction to loyalty and profitability - are often seen as a natural progression, with more mature companies measuring loyalty and profitability. This is simply wrong. And it is wrong for a very simple reason. We have to differentiate between what the customer wants of the organisation (great service, good prices, etc) and what the organisation wants of the customer (their loyalty, a decent return from working with them, etc). Customer loyalty and profitability don't supersede customer satisfaction - they look at the issue of customer measurement through a different lens. Customer loyalty and profitability are what the organisation wants. Great service and good value for money are what the customer wants. Our measurement systems have to track both, as both perspectives matter in successful organisations.

Saturday, 8 December 2012

The Great Myths of Measurement: Start with Strategy

Pick up any text or article on performance measurement and chances are you'll find the phrase "start with strategy". The underlying message being that you should align your performance measures to your organisation's strategy. How else can you be sure you are executing your strategy?

This oft-heard cry "start with strategy" is the first of the great myths of measurement. Organisations don't exist to execute strategies. They exist to create value. Value for their shareholders (or funders). Value for their customers. Value for the wider group of stakeholders with which they engage. If you don't create value for your staff, attracting and retaining talent is challenging. If you don't create value for your suppliers, getting great service from them is not straightforward. If you don't create value for the community in which you operate, retaining their support and goodwill is difficult.

The point is that organisations exist for a purpose and that purpose is to create value for stakeholders, so surely the first questions we should ask ourselves - when considering what to measure - are: (i) which stakeholders matter to us; (ii) what do they value; and (iii) how can we measure whether we are delivering value to them? Strategy comes later. If I am clear about what my stakeholders value then I can think about what strategies I am going to pursue to create this value. So let's stop the inane calls to start with strategy and focus instead on what really matters.

Sunday, 19 February 2012

The Fallacy of Leading Indicators


In recent months we’ve noticed an increasing number of executives asking “how do I get leading indicators”? It seems that everyone is frustrated by the fact that lagging indicators only report history and what has happened. And in today’s turbulent environment – where past performance is only a weak indicator of future potential – historical data has become even less useful. Hence the search for the magic leading indicators…

The problem with this search is that it is a fool’s errand. There’s no such thing as a leading indicator. Let us illustrate the point. Often people claim that customer satisfaction is a leading indicator. If you satisfy customers today, they’ll come back tomorrow and buy again from you. And even if they don’t come back, if they are happy, they’ll tell their friends about your great product or service and encourage them to buy from you. So customer satisfaction is a leading indicator of future sales.

Let’s look at this from a different perspective – let’s think about the link between customer and employee satisfaction. Many executives would argue that happy employees lead to happy customers. If employees are happy, they work harder, deliver better service, look after the customers more – hence customers are happier. So employee satisfaction is a leading indicator – it indicates what future customer satisfaction might be. But then customer satisfaction is a lagging indicator – at least it is a lagging indicator with respect to employee satisfaction. And therein lies the rub – customer satisfaction is both a leading indicator (with regard to future sales) and a lagging indicator (with regard to employee satisfaction). How useful is a categorization framework that allows a single item – customer satisfaction – to be both a leading and a lagging indicator?

So what’s the answer? All the talk of leading and lagging indicators is meaningless, unless you consider the context. What really matters is the relationship between the measures – the performance model that shows how different dimensions of performance interact and impact one another. To ask “what leading indicators should I use?” is naïve. The question we have to ask is: what performance model am I using to run this business? A good performance model illustrates the relationship between the different measures, allowing managers to understand how value is created through a network of interacting elements.
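One way to make this concrete is to represent a performance model as a directed graph of measures. The measures and links below are illustrative assumptions (the employee-to-customer-to-sales chain discussed above), not an empirically validated model; the point is that "leading" and "lagging" fall out of the relationships, not the measure itself:

```python
# A minimal sketch of a performance model as a directed graph:
# each measure maps to the measures it is assumed to drive.
# Names and links are illustrative, not a validated model.
performance_model = {
    "employee_satisfaction": ["customer_satisfaction"],
    "customer_satisfaction": ["repeat_sales", "referrals"],
    "repeat_sales": [],
    "referrals": [],
}

def leads(model, measure):
    """Measures this one is a leading indicator for (its successors)."""
    return model.get(measure, [])

def lags(model, measure):
    """Measures this one lags behind (its predecessors in the model)."""
    return [m for m, targets in model.items() if measure in targets]

# Customer satisfaction is simultaneously leading and lagging,
# depending on which relationship in the model you inspect:
print(leads(performance_model, "customer_satisfaction"))  # drives sales
print(lags(performance_model, "customer_satisfaction"))   # driven by staff
```

Asked "is customer satisfaction a leading indicator?", the only honest answer this model can give is "with respect to what?" - which is precisely the argument above.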