11 May, 2017 15:10

RoboCop: Police use AI to judge whether suspects are jailed or bailed

Robot judges are coming. A police force in England is set to become the first in Europe to use artificial intelligence (AI) to assess whether suspects pose too high a risk to be set free.

Durham Police are set to introduce the Harm Assessment Risk Tool, known as Hart, to classify arrested individuals as low, medium or high risk of reoffending if released.

The program is expected to launch in the summer and works by taking into account a suspect’s gender, postcode and offending history.
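The article does not detail Hart’s internals, so the following is only a minimal, hypothetical sketch of a three-way risk classifier over those inputs, assuming a random-forest model (the kind of ensemble reportedly behind Hart). The column names, encodings and records are invented for illustration and do not reflect the real system.

```python
# Hypothetical sketch: a three-class risk model over the features the
# article names (gender, postcode, offending history). All data invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OrdinalEncoder

# Toy custody records; a real system would use far richer history features.
records = pd.DataFrame({
    "gender":         ["M", "F", "M", "M", "F", "M"],
    "postcode_area":  ["DH1", "DH7", "DL14", "DH1", "SR8", "DL14"],
    "prior_offences": [0, 2, 7, 1, 0, 5],
    "risk":           ["low", "medium", "high", "low", "low", "high"],
})

# Encode the categorical columns as integers for the model.
enc = OrdinalEncoder()
X = records[["gender", "postcode_area", "prior_offences"]].copy()
X[["gender", "postcode_area"]] = enc.fit_transform(X[["gender", "postcode_area"]])
y = records["risk"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new arrestee; the output is one of "low", "medium", "high".
suspect = pd.DataFrame({"gender": ["M"], "postcode_area": ["DH1"], "prior_offences": [3]})
suspect[["gender", "postcode_area"]] = enc.transform(suspect[["gender", "postcode_area"]])
print(model.predict(suspect)[0])
```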

The system is the brainchild of Cambridge University computer scientists and was first road-tested in 2013. Since then the algorithm’s risk classifications have been checked against reality, with offenders monitored for more than two years to see whether or not they reoffended.

The project was deemed such a success that the tool will now be used in live decision-making.

“It’s not the ultimate decision maker, it is a support for the officers and the limitations are it’s only plugged into Durham Constabulary data — not any wider data,” said Durham Police head of criminal justice Sheena Urwin.

“The custody sergeant must also consider other factors they are obliged to take into account as part of their statutory function. Certainly there’s interest from about four or five forces all over the UK, and it’s not surprising.”

The force believes the technology has been “validated” by its case studies: predictions that suspects were ‘low risk’ turned out to be accurate 98 percent of the time, while forecasts for ‘high risk’ suspects had an 88 percent accuracy rate.
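Those figures read as per-class forecast accuracy: of everyone given a particular label, what fraction of those forecasts held up. A toy Python sketch of that calculation, with invented predictions and outcomes (only the method, not the numbers, mirrors the quoted results):

```python
# Per-class forecast accuracy: of everyone given a label, how many
# forecasts proved correct? Predictions and outcomes below are invented.
from collections import Counter

predicted = ["low", "low", "high", "high", "low", "medium", "high", "low"]
actual    = ["low", "low", "high", "medium", "low", "medium", "high", "low"]

hits, totals = Counter(), Counter()
for p, a in zip(predicted, actual):
    totals[p] += 1
    if p == a:
        hits[p] += 1

for label in ("low", "medium", "high"):
    if totals[label]:
        print(f"{label}: {hits[label] / totals[label]:.0%} of forecasts correct")
```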

However, some legal experts believe the breakthrough could have “dangerous” implications.

“This is a very dangerous step. By law, custody decisions must be made by human beings, taking complex circumstances into account, but in reality custody sergeants will delegate responsibility to the algorithm. They will undoubtedly face questions from higher up if they choose to go against it,” said Richard Atkinson, a member of the Law Society’s Criminal Law Committee.

“How will suspects’ solicitors be able to challenge the algorithm, if they’re only given a lot of data and told this means their client is high-risk? There’s a serious issue that something that’s quasi-scientific is given undue weight and effectively becomes gospel. And where does this end up? Do we have algorithms making decisions in court?”

“It’s what’s known in statistics as an ecological fallacy to draw conclusions about someone’s risk from their postcode. There’s also a risk that lots of people mount legal challenges on the basis that the algorithm has wronged them and the system grinds to a halt,” added Allan Brimicombe, professor of geoinformation studies at the University of East London.

Similar programs are already operating in the United States, but the competence of the technology is hotly disputed after a study last year revealed that the AI was not immune to racial bias, with black people almost twice as likely to be falsely labeled as potential reoffenders.
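That finding (the US study in question is widely identified as ProPublica’s 2016 analysis of the COMPAS tool) rests on comparing false-positive rates across groups: among people who did not go on to reoffend, how often was each group wrongly flagged as high risk? A minimal sketch of the comparison, with invented records:

```python
# Illustrative false-positive-rate comparison of the kind used to measure
# bias: among non-reoffenders, how often was each group wrongly flagged
# as high risk? All records here are invented for the example.
records = [
    # (group, flagged_high_risk, reoffended)
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("A", False, False), ("B", True,  False), ("B", True,  False),
    ("B", False, False), ("B", True,  True),
]

for group in ("A", "B"):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    rate = len(false_positives) / len(non_reoffenders)
    print(f"group {group}: false positive rate {rate:.0%}")
```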
