The Tyranny of Numbers

Lecture to the Royal Society of Arts at Gateshead, 18 October 2001.


As you might imagine, one of the Department of Health’s numerous targets is designed to measure the time patients take to get treatment in casualty.

What most of us don’t know is that, actually, they don’t measure the time people arrive there.  No, they measure the time between being seen there and getting treatment – another matter altogether.

Which is why one 88-year-old patient, who waited a horrific 24 hours to be treated, was recorded as having officially been there only for 30 minutes.

Health targets are a fruitful area of search for this kind of thing.  For example, there’s a rule that patients shouldn’t be kept on hospital trolleys for more than four hours.  In practice, some hospitals have got round this by putting them in chairs. 

Others have bought more expensive kinds of trolleys and re-designated them as ‘mobile beds’.

Everyone has stories like these, partly because we live in a culture that takes numbers terribly seriously, that relies on numbers as an unambiguous tool of efficiency – but one that has also, over the past year, become more and more suspicious of them.

It wasn’t just George W. Bush accusing his opponent of using ‘fuzzy math’ in the elections last November.  It’s been the Welsh and Northern Ireland assemblies abandoning school league tables.  It’s been the new government abandoning hospital waiting list targets.  It’s been the revolt of French economic students, complaining that their courses are full of what they call ‘autistic’ mathematical abstractions that don’t relate to the real world.

All this is specially topical at the moment because targets, indicators, measurements, numbers are so much part of the government’s favoured method of central control.

The examples I used just now were all ways that statistics can be misused – and you only have to turn on the TV to find more where they came from.  I was told in the USA with an air of outrage recently that 50 per cent of doctors graduated below average from medical school.  They didn’t say, of course, that exactly 50 per cent of doctors also graduated above average.
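The joke, incidentally, turns on the difference between the mean and the median: it is the median that guarantees an even split, and the mean only does so when the scores happen to be symmetrically distributed.  A toy sketch in Python, using invented scores, makes the point:

```python
import statistics

# Invented graduation scores for ten doctors (purely illustrative)
scores = [52, 61, 64, 68, 70, 72, 75, 80, 85, 93]

median = statistics.median(scores)
below = sum(1 for s in scores if s < median)
above = sum(1 for s in scores if s > median)

# By definition, (roughly) half fall on either side of the median
print(below, above)  # 5 5
```

With an odd number of scores, or ties exactly at the median, the split is only approximately half-and-half – which is all the outraged statistic was ever going to tell you.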

But I want to go further than misuse – and be a little sceptical about using numbers at all.  Not completely of course.  A world that never measures or counts is really beyond our control.  The trouble is that we’re in danger of doing little else.

We are in fact living in an age when life is completely overwhelmed by numbers and calculation.  And somehow especially men.

It’s usually men who can reel off sports scores way back through the decades. Or school league tables, voting figures, inflation rates and other bizarre facts.  I am as bad as all the rest.

As a man, I know that the average Brit spends 45 hours on hold on the phone every year, that 3.7 million Americans claim to have been abducted by aliens.  I know that the average Briton has sex 2,580 times in their life, with five different people.  If these mean anything.

The trouble is, we are all – men and women – increasingly controlled by targets.  As employees, we’ve got performance figures on the walls in every office, under every clock and in every lunch room.

And as consumers, we’re counted every time we buy anything.

The government’s also introduced about 8,000 new targets or numerical indicators of success since coming to office in 1997 – and recently added 150 new environmental targets to the pot. 

Accountancy firms cream off anything up to 10 per cent of British graduates to do all this counting.  Whole armies of number-crunchers are out there, boosting the cost of running transport, the NHS or social services.  As Professor Michael Power says, this is in fact a gigantic experiment in public management.

In some ways, we’ve been here before – especially in periods of great social hope like the 1830s, when the followers of Jeremy Bentham rushed across the country in stage coaches, armed with great bundles of tabular data – and measuring everything they thought was important. 

They measured the number of cess-pits (which they saw as an indicator of ill-health), or pubs (an indicator of immorality), or – to find out how religious children were – they tested the number of hymns they could recite from memory.

We can see much more clearly why this didn’t work in a society which isn’t our own.  Back then, it also produced the kind of myopic characters, obsessed with quantity rather than quality, that Dickens lampooned in the character of Mr Gradgrind in Hard Times.

Remember Sissy Jupe the circus girl, who knows all about horses because she was brought up with them, but finds she can’t actually define one.   “Quadruped,” says the class swot.  “Graminivorous.  Forty teeth, namely twenty‑four grinders, four eye‑teeth, and twelve incisive.  Sheds coat in spring; in marshy countries, sheds hoofs, too…”  And so on.

“Now girl number twenty,” said Gradgrind, turning back to Sissy.  “You know what a horse is.”

We’re not in that kind of world now, though we’re in danger of living in a related one.  What makes it different perhaps is that, because we expect more from our measurements, we’re applying the same idea – that we can control things if we count them – to more and more elusive but vital areas of life.

Doctors and psychiatrists are encouraged to diagnose patients according to numerical checklists because it seems more ‘open and scientific’. Companies are developing the idea of microchip implants for their workforce to measure their timekeeping. 

The Japanese multinational Matsushita has developed a ‘smart’ toilet that measures your weight, fat ratio, temperature, protein and glucose every time you give it something to work on. Then it sends these figures automatically to your doctor – probably the last thing they want.

The frightening thing is that, just because computers can count and measure nearly everything, we do.

There was a time when we could trust our own judgement, common sense and intuition to know if we were ill or not – without relying on a computerised lavatory to tell us.  Now we’re in danger of being unable to do anything without it being measured first.

I don’t just mean the politicians who can’t move without checking the polling data.  I mean those expensive academic surveys which tell you what we already knew.  Like the recent discovery that the death of a parent can scar a child for life, or that alcoholics have an unusually high rate of depression.  Surprise, surprise.

One British research foundation has found that unemployment tends to be higher in areas where there aren’t many job vacancies.  The University of Michigan has also recently revealed that children who don’t take exercise and eat a lot of junk food tend to be fatter.

But of course the most direct difficulty that all this measuring causes us is its unexpected effects on people’s behaviour, as we saw with the mobile trolleys.

Like the school league tables which made teachers concentrate on getting borderline pupils through at the expense of their weaker classmates.  Or the hospital waiting list targets that meant NHS managers only treated the quick simple problems, at the expense of everyone else. 

In both cases the people being measured did what made the figures look better, not what they should have done.  It’s a horribly modern disease, and its effects can be disastrous.

Look at the call-centre industry, for example.  It has a very high burn-out rate, a staff turnover of between 60 and 80 per cent every year and – with a few notable exceptions – it’s making the relations between companies and customers even worse than before.  And the main reason is that, because of all that IT equipment, every moment of staff time can be measured.

Building customer loyalty is hard to quantify, so what gets measured is how quickly staff can get callers off the phone.  That’s the bottom line.  Even the few seconds between calls – known in the industry as ‘post-call wrap time’ – gets measured and logged.  And then companies wonder why their customers hate them.

But behind all this measuring is what I believe is an old-fashioned world view.  The idea that large bureaucratic empires can be managed by numbers.  That there is actually nothing that human beings can do that machines can’t do better.  That high-tech is always more effective than what John Naisbitt calls ‘high touch’.  That – as McKinsey consultants say – ‘everything can be measured and what can be measured can be managed’.

It simply isn’t true, and – if you’ll forgive me – I want to run through ten peculiar ways in which these attitudes can bite us from behind, so to speak.

The first is that you can count people, but you can’t count individuals.

The trouble with the concept of Average Man – as it used to be called – is that he only exists in the statistical laboratory, measured at constant room temperature by professional men with clipboards and white coats.  Figures reduce people’s complexity, but the truth is complicated.

Policies measured so that they fit Mr Average precisely won’t actually suit anybody very much.  He’s a symptom of reducing humans to numbers, and forgetting there are any other dimensions to human beings that make a difference.

Paradox No 2: You change what you’re counting by counting it.

Simply because it is so hard to measure what’s really important, governments and institutions pin down something else.  They have to.  But the consequences of pinning down the wrong thing are severe: all your resources will be focussed on achieving something you didn’t mean to – as we saw with the trolleys and league tables.

How do you measure the success of a military unit in the Vietnam War?  Answer: body count.  Result: Terrible loss of life among the Vietnamese, but no American victory. 

How do you make sure schools are living up to parents’ expectations?  Answer: test the children as much as possible.  Result: exhausted kids who can see no further than exams.

How do you make trains more punctual?  Answer: measure how often they’re late.  Result: train companies simply lengthen the official journey times. 

Modern management is by quantifiable targets, which will always – almost by definition – miss the point.  It has all the makings of a fairy tale.  If you choose the wrong measure, you sometimes get the opposite of what you wanted.


Paradox No 3: numbers replace trust, but make measuring even more untrustworthy. 

When farmers and merchants didn’t trust each other to provide the right amount of wheat, they could use the standard local barrel stuck to the wall of the town hall, which would measure the agreed local bushel. 

When we don’t trust our corporations, politicians or professionals now, we send in the auditors – and we break down people’s jobs into measurable units so that we can see what they’re doing and turn it into figures.  If doctors hide behind their professional masks, then we measure their deaths per patient, their treatment record and their success rate, and we hold them accountable. 

It wasn’t always like that.  Previous generations realised that we lose some information every time we do this – information the numbers can’t provide. 

Go back a century or so, and even accountants recognised this.  “Use figures as little as you can,” said the first American accountant, James Anyon, giving advice to the next generation in 1912.  “Remember your client doesn’t like or want them, he wants brains.  Think and act upon facts, truths and principles and regard figures only as things to express these, and so proceeding you are likely to become a great accountant and a credit to one of the truest and finest professions in the land.”

It’s the same with doctors.  Despite the checklists and online expert systems, or the Korner tests, we can never fully measure what they do.  Doctors have intuition born of experience.  They have know-how that slips through the measurements.

So here’s the paradox.  Numbers are democratic.  We use them to peer into the mysterious worlds of professionals, to take back some kind of control.  In that sense, they are the tools of opposition to arrogant rulers.  Yet in another sense they’re not democratic at all.  Politicians like to pretend that numbers take the decisions out of their hands, but it’s a fantasy.  They, we, have to use our judgement in the end.

Paradox No. 4: When numbers fail, we get more numbers. 

Because counting and measuring is seen as the antidote to distrust, any auditing failure must need more auditing.  That’s what society demanded the moment Robert Maxwell fell off his yacht near the Canary Islands – owing, incidentally, twice as much as Zimbabwe.

Nobody ever blames the system – they just blame the auditors.  Had they been too friendly with the fraudsters?  Had they taken their eyes off the ball?  Send in the auditors to audit the auditors.

But it’s worse than that, according to paradox No.5: The more accurately we count, the more unreliable the figures are. 

We can’t afford to miss anything if our measurements are going to be absolutely precise.  So you get a peculiar phenomenon when the cost-benefit experts spend enormous efforts getting a figure absolutely correct – only to throw in something else which is simply plucked from the air.

This problem has a long tradition, going back to the French statistician Adolphe Jullien in 1844, who worked out precisely the cost of moving one unit of traffic on the new French railway system.  He finally came up with a wonderfully exact figure of 0.01254 francs per kilometre.  But what about administration and the interest on capital?  Ah yes, he conceded – but those were more difficult to assess, so he arbitrarily doubled the figure.

The great American cost-benefit experts, the US Army Corps of Engineers, would spend months on the exact cost-benefit of new waterways they approved of in the 1930s, then shove in a notional 600,000 dollars for national defence and 100,000 dollars for recreation.

Cost-benefit is often an unusual amalgam of the precise and the arbitrary.  It has to be if it’s going to be accurate.  It’s like Lewis Carroll’s story about the little boy who comes up with a figure of 1,004 pigs in a field.  “You can’t be sure about the four,” he is told.  “And you’re as wrong as ever,” says the boy.  “It’s just the four I can be sure about, ‘cause they’re here, grubbling under the window.  It’s the thousand I isn’t pruffickly sure about.”

Which brings us to Paradox No.6: The more we count, the less we understand. 

Numerical measurements are the international tools of scientists.  They allow experts to “speak one and the same language, even if they use different mother tongues,” said the philosopher Karl Popper.  Auditors look for measurements with no human content in their search for pure objectivity – like the metre, one ten-millionth of the distance from the pole to the equator.

But to do this, everyone has to count in exactly the same way, in laboratory conditions, taking no account of local variation or tradition – and that means the figures aren’t as informative as they might be.

Decisions by numbers are a bit like painting by numbers.  They don’t make for great art.  When you reduce something to figures or the bottom line, you lose information, and the Tower of Babel – speaking of different tongues – comes tumbling down again.

Another side-effect of all this is that the objective people who use numbers have status, and the people whose lives they are studying usually don’t.  This is how the development economist Robert Chambers put it:

“Status, promotion and power come less from direct contact with the confusing complexity of people, families, communities, livelihoods and farming systems, and more from isolation which permits safe and sophisticated analysis of statistics…  The methods of modern science then serve to simplify and reframe reality in standard categories, applied from a distance…  Those who manipulate these units are empowered and the subjects of analysis disempowered: counting promotes the counter and demotes the counted…”

Then he ended up with a neat little verse that summed it all up:

“Economists have come to feel

What can’t be measured, isn’t real.

The truth is always an amount

Count numbers, only numbers count.”

And so to paradox No.7: The more we count, the less we can compare the figures. 

Different people, different eras, different places count differently.  Chambers compared 22 different erosion studies in one catchment area in Sri Lanka, and found that the figures for how much erosion was happening varied by as much as 8,000-fold.

The lowest had been collected by a research institute wanting to show how safe their land management was.  The highest came from a third world development agency showing how much soil erosion was damaging the environment. 

The frightening part is that all the figures were probably honest, but the one thing they failed to provide was objective information.  For that you need interpretation, quality, imagination.  It’s a paradox.

Take crime.  In 1978, the Chief Constable of Greater Manchester, the fearsome James Anderton, made a challenging speech about crime figures for England and Wales.  There had been 77,934 recorded crimes in 1900, he said.  In 1976, there were 2.1 million.  The country was horrified at this apparently objective measure of criminality.

But anyone who’s read Geoffrey Pearson’s classic Hooligan will know just how differently people defined such things in generations gone by.  In 1900, the recording of crime was pretty informal, and what would have been seriously violent now, might then just have been charged as a simple assault or drunkenness. These were the days when one in four London policemen were assaulted every year.

Crimes against property, on the other hand, really upset the Edwardians and late Victorians.  And when policemen in particularly violent areas tried to make an arrest, they were liable to be surrounded by large threatening crowds shouting “boot them!”


The past is a foreign country, as L. P. Hartley said – they do things differently there.  And if you doubt it, you can read the complaints of the chaplain of Newgate Gaol in the days of the arch-number cruncher Jeremy Bentham – that all the boys in prison kept a mistress, including those aged nine and ten.  Simple figures can’t possibly compare such different worlds.

Crime statistics tell you more about what we fear, and what the police are concentrating on, than anything objective.  If you doubt it, ask the police.

Paradox No.8 is more controversial: Measurements have a monstrous life of their own. 

Stalin announced his first Soviet five-year plan in 1928, an enormous undertaking which was planned to increase industrial output by 235.9 per cent and labour productivity by 110 per cent – but the figures were completely spurious.  They were intended to lend a scientific legitimacy to the whole enterprise. 

The actual effect of the plan was to reduce real per capita income by half, and starve millions on what Stalin referred to as the ‘agricultural front’.  Even so, he declared the first five-year plan a success 12 months early in 1932, and the second one started right away.

Not only were nearly all the figures falsified – something you can do during a reign of terror – but they carried within them a terrible authoritarianism to try to force them to be true.  Which is why strikes had to be redefined as ‘sabotage’, and why after 1939, employees had to be fired if they were even once more than 20 minutes late for work.  It was also why one in eight of the Soviet population were either shot or sent to labour camps.

We don’t do that kind of thing nowadays.  But we do measure our progress towards so-called ‘best practice’, locking local authorities into a dull and uninspired version of what may once have been best, but is now hopelessly second rate. 

And worse than Best Value is the hideous bureaucratic concept of Best Available Technique Not Entailing Excessive Cost.  You don’t have to be Stalin to destroy innovation by counting.


And even more controversially, let me tentatively propose paradox No.9: When you count things, they get worse.

In quantum physics, the mere presence of the observer in sub-atomic particle experiments can change the results.  In anthropology, researchers have to report on their own cultural reactions as a way of offsetting the same effect.  And once you start looking at numbers you keep falling over a strange phenomenon, which is that the official statistics tend to get worse when society is worried about something.

UK child abuse statistics stuck at the 1,500-a-year mark until 1984, when unprecedented publicity catapulted the issue to the top of the public agenda.  Between 1984 and 1985, child sex abuse cases shot up by 90 per cent.  And in the following year they did the same again.

Now, campaigners would say that the actual rate of abuse is never reflected properly in the statistics.  I’m sure they’re right.  All I am saying is that the statistics wouldn’t have told you anything, except how strongly the public felt about it at the time. 

So often, the statistics start rising after the panic, rather than the other way round.  You can see the same phenomenon recently in the rise in race attacks, in asthma, in depression and in a range of other social problems we fear the most.

And it’s difficult to know quite why.  Sometimes the definitions change to reflect greater public concern.  Sometimes people just report more instances of it because it’s in the forefront of their minds.  Sometimes they borrow the category as a useful place to hang phenomena they hadn’t defined before.  But sometimes, maybe, what we fear the most comes to pass.

And finally, you’ll be relieved to hear: paradox No. 10: The more sophisticated you aspire to be, the less you can measure.  This is true for politicians who try to measure the elusive source of ‘feel good’ in their populations, and long for the days when they could just measure wages.  And it’s true for the doctors who used to measure corpuscles, but know there’s some other kind of psychic health that allows people to recover from operations better – but which they can’t count under the microscope.  But it’s most urgent for business.

Business leaders increasingly recognise that their key assets are intangible, and extremely hard to measure directly – like knowledge, information or reputation.  Count up the value of their fixed assets and you come up with a figure wildly different from the actual value of their company on the world markets.  Microsoft’s balance sheet famously used to list concrete assets worth only six per cent of what the company was worth.

What’s really important can’t be measured.  Yet if your competitors are going to try, and benefit from it, then you have to try too.  That’s why the whole issue of measurement is so topical and why the unmeasurable is coming under such pressure to squeeze itself into a countable shape.


So why count at all?  The answer is that numbers are an absolutely vital tool for human progress.

They mean we can test hypotheses, seek out what’s fraudulent and inefficient.  They give us some control over an unpredictable world.  They allow us to take problems by surprise, so to speak – to force ourselves to look at them in new ways.  And they can prove us wrong. 

It’s just that they’re not as objective as they seem, and – by themselves – they can’t tell us what causes what.  That requires the judgement, common sense and intuition you need for deduction.

What can we do about all this?  Well, I think we might try something that sounds a bit contradictory.  We can try measuring more and we can try measuring less both at the same time.

Measuring more to rid us of the way that bottom lines, of all kinds, twist the result.  The educationalist Howard Gardner pointed out that there are many different kinds of intelligence, for example, and the same is true of business success, quality of life, health and so many of those elusive ideas that just refuse to be measured.  That means multiple bottom lines at least.

But we can also recognise how single measures are about control.  We can encourage people to measure locally what they think is important, not what they’re told to measure.

We can de-standardise the counting process, so that the subjects of measurement do their own measuring – the pupils, the patients, the poor.  Like they do in one Latin American city that measures air pollution by the number of days you can see the Andes from the city centre.

That’s a measure that can inspire and involve people, whereas measuring ozone in parts per million has to be done silently in a laboratory by a technician.

Now, I know it’s accepted wisdom to say the real answer is distinguishing between measuring outputs and outcomes.  But I think this still begs the question. 

What is the outcome of Britain’s National Health Service for example?  The number of patients successfully treated?  Or is it the health of the population?  The statistics for those would be diametrically different.  And that’s the point: even outcome measurements assume that our institutions should be permanent.  They are about organisational control.  They don’t let us imagine whether we might be better off without institutions of that kind at all.

And outcome measurements that you can’t make a difference to rather miss the point.  The Environment Agency, for example, now measures its success partly by the rise in sea levels it has managed to avoid – which it has very little power over.  I seem to remember King Canute tried a similar indicator himself.

So don’t let’s give up on the idea of measuring less.  Doctors or development economists will tell you they know very quickly what’s wrong with a patient or economy – but then have to spend a great deal of public money collecting the figures to persuade anybody else. 

Measuring less saves money.  But it also requires considerable faith in other people, and that’s in very short supply.  It means giving more hands-on experience to teachers, managers, civil servants and police.  It means lecturing less and listening more.  It means decentralising power to smaller, human-scale institutions, to face-to-face managers.  It means more of Tom Peters’ famous description of effective business: Management By Walking Around. 

Most of all, it means practising using our imagination and intuition.

A world where we count more is stricter and sometimes fairer, but it has less life than a world where we count less.  When we count less and get it wrong, we risk inefficiency, bigotry, ignorance and disaster.  But when we count less and get it right, we probably get closer to humanity than we can any other way.  Human beings can deal with a complex world better than any system or series of measurements.


Measuring less means telling stories more and it means asking questions.  Telling stories, because they can often communicate complex, paradoxical truths better than figures.  Asking questions because they can devastate political statistics. 

Yes, the carbon monoxide rate’s gone down, but is the air cleaner?  Yes, our university professors have produced a record number of published papers, but is their teaching any good?  Numbers and measurements are as vulnerable as the Emperor’s New Clothes to the incisive, intuitive human question. 


The closer any of us get to measuring what’s really important, the more it escapes us, yet we can recognise it sometimes in an instant.  Relying on that instant a bit more, and our ability to grasp it, is probably the best hope for us all.

Let me end by quoting Dee Hock, the founder of Visa: “In the years ahead, we must get beyond numbers and the language of mathematics to understand, evaluate and account for such intangibles as learning, intellectual capital, community, beliefs and principles, or the stories we tell of our tribe’s values and prosperity will be increasingly false.”