Why life's too short for evidence-based policy
Speech to NCVO/VSSN Researching the Sector conference, Warwick University, 13 September 2006
Just less than a year ago, the government’s Respect Tsar Louise Casey was secretly recorded at a private dinner uttering the following unministerial phrase: “If No. 10 says bloody 'evidence-based policy' to me once more I'll deck them.”
What I wanted to do today was to delve a little further into this remark. What, after all, is wrong with evidence-based policy? It must be better than policy based on hunch. Or is it?
And yet anyone who flies a bit too close to the official mind will realise that there’s a misunderstanding in government these days about what actually constitutes evidence.
If you listen to the whisperings among policy-makers in the voluntary sector, as elsewhere, you find they’re on the horns of a dilemma about who to blame – for almost everything actually.
On the one hand you hear them complaining that policy is pushed through without a scrap of evidence to support it.
The DTI consultants who recommended extending Sunday trading – mercifully rejected by ministers this time – carefully measured the economic benefits very precisely, but ignored all the disbenefits completely.
There’s a lot of that sort of thing in government.
I know one study, admittedly rather informal, that found – for all the talk of evidence-based policy – what really influenced ministers was a human anecdote told to them at a critical moment. And on such flimsy grounds, billions of pounds have been shifted around the public sector many times.
There are those who say that the one sure way of influencing government policy at the moment is to insert something in a speech by David Cameron, but I’ve got no evidence for that either.
I’ll come back to the business of stories later, because I’m not sure that – used in the right way – this isn’t more positive than we think.
But back to the dilemma. On the other horn is the tyranny of evidence, a hard-headed business where sceptical men – it often is men – wait endlessly for the proof that never arrives.
As somebody said of Bertrand Russell: they've had an open mind for so long that they can't get the damn thing shut.
Which is why, when you look a little further, to see what evidence actually lies beneath the most hard-headed government departments, you sometimes find very little.
Those involved in setting up the National Institute for Clinical Excellence tell me they were astonished at how patchy the medical evidence was, even for treatments we take for granted.
The Treasury’s Invest to Save programme accepted the argument for an ambitious investment in youth services in East Anglia on the grounds that a riot in Peterborough, if they could prevent it, would otherwise cost precisely £12 million.
But I would hate anyone to think that, by gently teasing the hard-headed people in government, I believe the evidence they want is possible. I don't and it isn't. And the sooner they understand this, the better policy we will get.
And nowhere is this more vital, it seems to me, than in the voluntary sector, because this impossible evidence is demanded of us more than most.
Thanks partly to the kind of management consultants ministers fall prey to, our government – surely the most utilitarian since the embalming of Jeremy Bentham – is now gripped by what you might call the McKinsey Fallacy.
“Everything can be counted, and what can be counted can be managed.” That’s the McKinsey slogan.
The truth is that everything that is most important – love, health, education, care – can’t be measured, so what is less important gets managed.
That’s the first reason why evidence-based policy is so elusive. Simply because it’s so hard to measure what’s really important, governments and institutions pin down something else. And all their resources get focused on achieving something they didn’t quite intend.
We all know how this works. We test children in schools so intensively that they are simply taught to pass the tests. They get tested on comprehension passages rather than taught to read stories.
We congratulate the NHS on the rise in the number of prescriptions it issues, without asking whether that's related to health.
It stands in the great tradition of the Victorian statisticians who tried to measure the morality of children by counting the number of hymns they knew by heart.
Two other reasons why the evidence is never quite forthcoming.
One is that controlling people with numbers never works.
The principle that numerical measurements will always be inaccurate if they are used like this is now known as Goodhart’s Law.
The reason is that, however incompetent staff may be, they will always be skilful enough to make targets work for them rather than against them.
Take for example, the rule that patients shouldn’t be kept on hospital trolleys for more than four hours. In practice, some hospitals have got round this by putting them in chairs. Others have bought more expensive kinds of trolleys and re-designated them as ‘mobile beds’.
My own park in Crystal Palace is being built on, partly because the LDA believes it is only used by poor people – rich people's parks don't get built on in my experience. And partly because it will go into the statistics for brownfield development.
This is a fundamental flaw in the target culture, and social researchers find themselves drawn inadvertently into collusion with it.
Then there’s the problem of how to interpret the numbers.
No matter how many screeds of figures are available, they will never tell you what causes what.
There are the same number of marriages each year now as there were in the 1890s, despite the higher population. Is that a sign of failing moral standards – or something else?
“London is too full of fogs and … serious people,” said Oscar Wilde. “Whether the fogs produce the serious people, or whether the serious people produce the fogs, I don’t know.”
We do know, of course. But we use our common sense, judgement and intuition. That’s what distinguishes good researchers from the herd.
But there’s a fantasy at the heart of evidence-based policy that these things are like the dials of a gigantic machine, which hums away without the need for contentious human interpretation.
That’s the problem with it. In practice, it means complete inactivity while we wait for evidence that’s never going to arrive. It’s an excuse for doing nothing at all.
Yes, I know, the accepted answer is that we must distinguish between measuring outputs and outcomes, but I’m sceptical about this too.
What’s the outcome of the NHS for example? The number of patients successfully treated? Or is it the health of the population? Because that’s diametrically different.
Outcome measurements assume that our institutions should be permanent. They’re about organisational control. They don’t let us imagine whether we might be better off with different institutions instead.
Real outcome measurements – if you can find them – are usually outside the control of institutions anyway.
The Environment Agency, for example, now measures its success partly by the rise in sea levels – which it has almost no power to affect one way or the other.
In fact, King Canute tried a similar indicator himself.
Targets are the offspring, partly of the early utilitarians in the 1830s, partly of French cost-benefit pioneers in the 1840s, partly of American contract culture of the 1990s.
But a decade on, there are signs of softening. Ofsted is turning itself into a mentoring organisation. Ruth Kelly claims she’s rowing back on the 105,000 targets that cover England and Wales.
But there’s a new twist to targets that, because it’s emerging from regional government, is particularly corrosive for the voluntary sector.
It’s where targets get linked directly to funding, and are therefore the object of serious battles between voluntary organisations which ought to be co-operating.
To give one example: a time bank in south London.
For those who don’t know about time banks, they’re mutual volunteering organisations which seem to be able to involve hard-to-reach sections of the population.
That’s their purpose. So it makes sense for them to team up with other organisations to help both achieve their objectives.
This time bank teamed up very sensibly and successfully with a local Healthy Living Network, through which its funds from the New Deal for Communities were also channelled.
But at the end of the year, when the inevitable accounting takes place, who had achieved this throughput of clients?
Both of them had, of course. But because the targets are now linked to money, saying so would have been double-counting, and therefore fraudulent. So, with more clout, the Healthy Living Network claimed the outputs, and the time bank had its funding stopped.
It’s a fascinating example of how outputs are actually turning into money.
Take for example, the strange looking-glass world of green energy. If you want to buy green energy from some of the bigger electricity supply companies, they’ll sell it to you at a premium.
They will in fact buy it in the Netherlands, strip the greenness off it and sell that aspect in London.
You may ask whether the Dutch green energy, generated by wind power, is then sold as ordinary fossil-fuel energy in Holland, and of course we don't know. But I rather suspect not.
That’s the way targets are going. The numbers become a kind of currency, which is why regional agencies find themselves talking about ‘buying outputs’.
What they mean here is not quite what electricity companies mean. They mean funnelling their money not where it's actually needed, not where it's most effective or sustainable or beneficial, but where they can quickly make up the required outputs they have to deliver.
Or worse, considering Goodhart's Law, where they only seem to deliver them. I'm thinking of the advice given to one of our projects at the new economics foundation at a training event: "If you get any couples, mark them both down as women, because we're short of those."
This is the world the voluntary sector has stumbled into by accepting the task of delivering government services – or should I say delivering outputs.
It’s a delusory world where nothing is quite what it seems. Where outputs are more important than achievements. Where every charity has targets imposed by funders who have their own miserable yoke of targets to deliver as well.
Big fleas have little fleas upon their backs to bite them.
And each time the bites are passed down, they get tougher and more intransigent. So while the mandarins at the Treasury can be relatively relaxed about the standard of proof they require before acting, those Whitehall targets descend from funder to funder.
Until they reach the bottom flea – the poor charity which has to make something happen on the ground – by which time they have become a Gradgrindian nightmare which bears no relation to reality.
“If you can’t prove it, we can’t pay for it,” one of our time banks was told by their funder, the ultimate beneficiary of the Stalinist systems that emanate from the Government Office for London.
But they can’t prove it. How can they prove it? All they can do is desperately mould the reality into the targets.
And as voluntary sector researchers, we are expected to collude with this Kafka-esque world, where some dishonesty is a basic requirement for achieving anything at all.
With a bizarre health and safety regime that seems determined to eradicate action of any kind.
All under the auspices of funders with no interest in what you actually do.
Even the blessed Lottery will pay you no attention at all. They'll not be pleased at your successes any more than they'll be interested in your difficulties – as long as you fill in the monitoring form that predicts precisely the racial make-up of everyone you'll affect in three years' time.
And here I’d like to make a wholly unproven assertion. It is evidence-based, but it’s the evidence of experience rather than measurement.
It’s true in business, communities, public services and government – perhaps most of all in the voluntary sector – but our masters prefer not to think about it.
If you employ imaginative and effective people at local level and give them the freedom to innovate, they will succeed – no matter what the programme. If you don’t, they will fail, no matter what the programme.
I call this the People Principle.
I remember listening at a conference to Debbie Sanderson, the headteacher behind the original Extended Schools programme.
She explained how she’d arrived on her first day as head, to hear a knock on the office door, and her secretary explained that the angry mother outside had just been prosecuted for assaulting the deputy head.
Two years later, that mother was head of the school's anti-social behaviour unit, and her friends are in charge of other aspects. And they're paid in chocolate coins.
It’s a brilliant example of just what an innovative individual can achieve by building effective relationships with those around her.
The next speaker was the government official charged with rolling Extended Schools out across the region, and a couple of minutes listening to him made you realise why they would fail.
That is in fact why governments usually fail. Why they waste vast sums but leave the basic problems unaffected. They forget the human factor and don’t value it. Often, they actively frustrate it.
Two generations ago, the People Principle was an obvious piece of common sense.
Now, officials go about solving problems like this.
First they turn them into abstractions, removing all those dull and mundane specifics and what they probably believe are miserably irrelevant details.
Next they formulate some abstract maxims that can apply to any situation or any community.
Then they appoint some worthy from the public sector who can be trusted to put those maxims into effect without regard to local peculiarities.
Then they assign narrow numerical measures to every aspect of the abstraction, and convince themselves you can somehow pin down the progress.
What do they achieve? Very little, though the metrics appear otherwise.
I caricature a little, I know. But that’s the legacy of the McKinsey Fallacy. Lots of activity; little change.
I believe these are the ingredients of a successful voluntary sector: People, Imagination, Relationships, Respect for individual details, those human aspects that metrics can’t measure.
And it doesn’t matter where you’re working, or who you’re dealing with, the details of the situation will burst through any available target, metric or statistical definition.
That’s what makes change possible. Yet the voluntary sector is being moulded into a shape which downgrades all of them.
So yes, I do feel angry about this. I feel it when I hear about probation officers responsible for 600 clients each, or hospitals where you never see the same doctor twice, and when I watch the possibility of acting on the world slipping through the fingers of the public sector.
And then watch the voluntary sector forced by their funders to plunge in the same direction.
Rather unwisely, in a previous talk I gave on this subject, also to the voluntary sector research community, I found myself talking about standing up to ‘the enemy’.
Of course, I wouldn’t describe the government as the enemy. I wouldn’t want to fall foul of the War on Terror.
But there is an attitude which has parts of the government in its grip – which I would define myself as resisting. It’s a technocratic delusion that believes the vital human work we do as a sector can be boiled down to countable metrics.
That human relationships have no place in the planning or delivery of services, charitable or otherwise.
This kind of thinking, if it spreads much further – if it embraces more of the voluntary sector – will mean we may have to re-invent the whole idea of charity from scratch.
I’m not saying that the voluntary sector, even when it’s delivering public outputs, does no good work. Of course it does.
Nor am I saying that numbers play no role in evaluation. Of course they do. They can puncture assumptions and take us by surprise.
I am saying that real change is best achieved in small institutions, in human-scale relationships, and – since we’re on the subject – when those are reciprocal relationships too.
We face a regime that not only doesn’t understand that, but is dedicated to rooting out the very idea.
That’s why I find the revelation that ministers take decisions after hearing appropriate stories rather heartening. Or I would if they didn’t expect everyone else to behave differently.
Because stories convey complex human situations in a way that numbers never can.
So that’s one recommendation, if you want it.
Not just stories, either, but other forms of evaluation developed by my colleagues at the new economics foundation and others. Evaluation that starts from where people actually are and what they feel.
Then, we’ve got to tackle the centralisation of power. Empires require measurement systems to control their outlying departments which they dare not trust to take their own decisions.
Numbers are the tools of empire.
Decentralisation can start rolling back the contradictions at the heart of evidence-based policy. We need school league tables that much less if we know our local headteacher.
Though we do clearly have to develop new forms of local accountability. And if the voluntary sector doesn’t do that, no-one will.
Third, tell the truth to power. Refuse, if we possibly can, the targets that are imposed on us. Explain again and again what kind of evidence is possible and meaningful.
Fourth, ask questions, because they can devastate targets.
Yes, the carbon monoxide rate’s gone down, but is the air cleaner? Yes, our university professors have produced a record number of published papers, but is their teaching any good? Numbers and measurements are as vulnerable as the Emperor's New Clothes to the incisive, intuitive question.
And finally, tell the truth about the importance of being generally right rather than precisely wrong.
As social researchers, we need to provide evidence. But it needn’t involve numbers. And we also have to remember we’re part of a wider movement with humanity and relationships at its heart.
The first accountant in the USA was a man called James Anyon, originally from Liverpool. When he was very old, in 1912, he gave a lecture to young accountants, which the doyens of evidence-based policy need to read.
“Use figures as little as you can,” he urged. “Remember your client doesn’t like or want them, he wants brains. Think and act upon facts, truths and principles and regard figures only as things to express these, and so proceeding you are likely to become a great accountant and a credit to one of the truest and finest professions in the land.”
That’s what it’s all about. Save the brains. Save the human. And only we can do it.