Computer says no bail – does AI reduce or increase human bias?
Does artificial intelligence (AI) – the ability of a machine or a computer program to think and learn – have any role in criminal justice? It’s so little used in the UK that it has not caused much controversy. But in the USA it has gained popularity as an aid to deciding whether defendants should get bail or be remanded in custody.
Most US states have run a terrible system whereby bail is only for those who can afford it – a significant cash sum has to be offered to the court if someone wants to await their trial in the community. Those who can’t afford to pay a “bail bond” end up behind bars. There has been a long-running and successful campaign to stop courts exacting money in return for pre-trial freedom. But in reforming their bail systems, many states have turned to AI to help them work out the risk of granting bail. The two main risks are that the defendant will re-offend while on bail or will not turn up to court. So charities and others have created computer programmes to predict these risks based on a defendant’s criminal history and characteristics.
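To make the idea concrete, here is a minimal sketch of how a points-based pre-trial risk instrument might work. Everything in it – the factors, weights, caps and thresholds – is invented for illustration and does not describe any real tool:

```python
# Illustrative sketch only: a hypothetical points-based pre-trial risk score.
# The factors, weights and thresholds are invented for illustration and do
# not reflect any real instrument.

def risk_score(prior_convictions: int,
               prior_failures_to_appear: int,
               current_charge_violent: bool) -> int:
    """Return a crude risk score; higher means higher assessed risk."""
    score = 0
    score += min(prior_convictions, 3)            # cap so history can't dominate
    score += 2 * min(prior_failures_to_appear, 2) # past no-shows weighted more
    score += 2 if current_charge_violent else 0
    return score

def recommend(score: int) -> str:
    """Map a score to a hypothetical release recommendation."""
    if score <= 2:
        return "release"
    if score <= 5:
        return "release with supervision"
    return "detain pending individualised hearing"


# Example: one prior conviction, no failures to appear, non-violent charge
print(recommend(risk_score(1, 0, False)))  # → release
```

Even a toy like this makes the bias worry visible: whatever correlates with the chosen factors (here, prior contact with the criminal justice system) will drive the score.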
In New Jersey, where such a programme has been used extensively and bail bonds have been phased out, the use of remand has been cut by 16%. But use of such programmes remains controversial because some suspect that the algorithms embed existing biases in the system. “My concern [about using the tools] is that what you could have is essentially racial profiling 2.0,” said Vincent Southerland, the executive director of the Center on Race, Inequality, and the Law at the New York University Law School. “We’re forecasting what some individuals may do based on what groups they’re associated with have done in the past.” Some activists also worry that, even in jurisdictions that have adopted the tools in good faith, judges may not follow their suggestions in setting bail or other pre-trial conditions, and that the new systems may go unscrutinised because communities assume any problems have been fixed.
The ACLU, the powerful American civil liberties NGO, is in principle against the use of such computer programmes but, given their widespread use, it also promotes principles for those who do use them. These include:
- If in use, a pre-trial risk assessment instrument must be designed and implemented in ways that reduce and ultimately eliminate unwarranted racial disparities across the criminal justice system.
- Neither pre-trial detention nor conditions of supervision should ever be imposed, except through an individualised, adversarial hearing.
- If in use, a pre-trial risk assessment instrument must be transparent, independently validated, and open to challenge by an accused person’s counsel. The design and structure of such tools must be transparent and accessible to the public.
I agree with these principles, but slightly disagree with the ACLU’s desire to ban such programmes – they appear to be helping reduce the use of cash bail. And if US courts are anything like those in England and Wales, traditional human remand decision-making is pretty poor. Judges and magistrates are risk averse and rely on the prosecution’s view – often based on thin evidence – of whether someone is at high risk of reoffending or of failing to turn up at court. The Lammy review of ethnic disproportionality in the criminal justice system suggested that defendants from BAME communities are particularly likely to end up on remand, so we can’t rule out unconscious bias among prosecutors and judges in England and Wales.
There is good evidence that judges can be swayed by all kinds of factors, including the demeanour of the defendant, whether they themselves have just had lunch and whether their favourite football team recently won or lost. At least an AI programme doesn’t incorporate such random factors. And if its algorithm is publicly available, everyone can see what information affects the risk rating generated.
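The transparency point can be sketched in a few lines: if a tool’s factors and weights are published, anyone can recompute a defendant’s rating and see exactly how much each factor contributed. The factors and weights below are hypothetical, chosen purely to illustrate the idea:

```python
# Illustrative sketch: with a published weight table, a risk rating can be
# broken down factor by factor and audited. Factors and weights are invented.

PUBLISHED_WEIGHTS = {
    "prior_convictions": 1.0,
    "prior_failures_to_appear": 2.0,
    "age_under_25": 1.5,
}

def explain(defendant: dict) -> list:
    """Return each factor's contribution to the overall rating."""
    return [(factor, weight * defendant.get(factor, 0))
            for factor, weight in PUBLISHED_WEIGHTS.items()]

def total_rating(defendant: dict) -> float:
    """Sum the per-factor contributions into a single rating."""
    return sum(contribution for _, contribution in explain(defendant))


# Example: two prior convictions, defendant under 25
for factor, contribution in explain({"prior_convictions": 2, "age_under_25": 1}):
    print(factor, contribution)
```

This kind of factor-by-factor breakdown is exactly what a defendant’s counsel cannot do when a tool’s design is kept secret.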
Everyone in England and Wales seems pretty wary of using AI to help court decision-making. But it is being used to help police decide who shouldn’t go to court. In Durham the police are pioneering “deferred prosecution”, whereby some people who have been arrested and charged are offered a choice: complete a “rehabilitation” programme, or be prosecuted in court and take the consequences. Custody officers need criteria to screen out those defendants who should not be offered this choice and should definitely be prosecuted. These criteria have been incorporated into an algorithm which custody staff use to aid their decision-making. The Harm Assessment Risk Tool (HART), developed with Cambridge University academics, predicts the likelihood that an individual will reoffend in the next two years. The Chief Constable of Durham emphasises that it is simply meant to help decision-making and reduce the risk of a serious re-offence. The force is currently piloting the tool and testing whether police officers do better than the algorithm. The Chief Constable has joked that he was pleased that “one or two people had taken issue” with the HART pilot – “because that means you trust the cops”.
Who knows what the future holds for artificial intelligence in criminal justice, but I think it has potential. While so much conscious and unconscious bias operates in human decision-making, a well-designed algorithm may help produce more consistency and even reduce bias.
A great panel (including yours truly) will be discussing AI at the Public Law Project conference on judicial review trends and forecasts on 16th October.