If we don’t know the impact of any programme, we shouldn’t keep on running it

Penelope Gibbs
02 Jul 2017

Learning from failure is good, but not if it takes over twenty years to do so, and creates more victims of sex crimes in the meantime. A programme to treat sex offenders was established in prisons in 1991. It was based on specially designed cognitive behaviour programmes which had been successful in Canada. If you adopt someone else's programme, it is crucial to implement it faithfully and to measure its impact. Neither happened with the sex offender treatment programmes run in prisons in England and Wales. The programmes were delivered by people who were not properly trained and, although some outsiders raised doubts about whether they were working, an evaluation was not produced until last year. It showed that those who had attended the programmes between 2001 and 2012 were 25% more likely to reoffend than those who had not, i.e. the programme was doing harm. The recidivism of sex offenders is low anyway, so this did not result in many more sex offences. But the programme failed, and that failure cost time, money and victims.

There are several things I don't understand about the evaluation of the programme. Why did it take so many years to decide to evaluate outcomes? Why was the evaluation report not published? Unfortunately, the whole episode is not a one-off. It happened because of a kind of groupthink which resists the gathering and analysis of evidence, and resists challenge.

The inspectorate recently reported on the "through the gate" programmes set up to help prisoners on release. These programmes were found to have no positive impact on ex-prisoners' life chances. Thank heaven for the inspectorate, but it is surely not their job to evaluate whether services are properly conceived?

Justice is littered with areas where we have no data, and with programmes which are not properly evaluated. To my knowledge, the effectiveness of the intensive supervision and surveillance programme (ISS) for under-18s convicted of serious offences has not been evaluated in ten years. So we have no idea whether it works. The post-programme reoffending level is high, which is an indication it doesn't.

Most of virtual justice is an evidence-free zone. The last research on video hearings from prison was published in 1999, and did not properly evaluate either the comparative costs or the justice outcomes of video versus "real" court hearings (because it was not designed to do so). The most recent research on police station to court video hearings was published in 2010. An evaluation of pre-recording the evidence of vulnerable witnesses (section 28) was published last year. This was a process evaluation, which means it neither compared pre-recording with an alternative approach nor assessed outcomes. So we have no idea whether witnesses had better experiences this way than if they had given evidence by video "live", or actually in court. Nor do we have any idea what impact the approach had on outcomes. Guilty pleas and convictions were monitored, but the results cannot be used because the researchers had insufficient data to draw conclusions. Despite this, senior judges and ministers have referred to the programme as successful, on the basis of this evaluation.

Research is not foolproof, and good research is expensive. Inspections are a snapshot. But it is cheaper to fund research and pay attention to its results than to establish and perpetuate programmes which are counter-productive. The sex offender treatment programme has been halted. But what about the rest of the many prison- and community-based rehabilitation programmes? And what on earth are we doing expanding video hearings for witnesses and defendants, aged 10 upwards, without having any idea of outcomes?