Artificial intelligence 'marginally' better at predicting reoffending

Artificial intelligence is marginally more accurate than experienced police officers in predicting the likelihood of criminals reoffending, according to a pioneering study.

Jan 25, 2022
By Tony Thompson

Preliminary research into a two-year pilot programme run by Durham Constabulary found that the Harm Assessment Risk Tool (Hart) was slightly better overall at predicting reoffending rates than human custody sergeants.

As part of the study, overseen by Cambridge University affiliate Dr Geoffrey Barnes, both the machine-learning program and human custody officers assessed whether offenders brought into custody were at high, medium or low risk of reoffending.

Of those who were actually brought back into custody for a serious offence within two years, Hart had correctly flagged 89.8 per cent as being at high risk of reoffending, compared with a success rate of 81.2 per cent for custody officers.

Of those it predicted to be at low risk of reoffending, Hart was right 78.8 per cent of the time, compared with 66.5 per cent for the custody officers' forecasts.

The study was based on individuals who were brought into custody in County Durham and Darlington between September 2016 and October 2017 and who were then followed up to see whether they reoffended and returned to custody within two years.

Across all indicators, the Hart model proved to be accurate 53.8 per cent of the time compared with a 52.2 per cent accuracy rate for custody officers.
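The article does not spell out exactly how the study defined each of these figures, so the short Python sketch below is only an illustration of the underlying arithmetic, using invented example data rather than anything from the Hart study. It scores high/medium/low risk forecasts against observed two-year outcomes in two common ways and computes one simplified overall accuracy.

    # Illustrative sketch only: invented data, not the Hart model or the study's scoring code.

    def share_flagged(forecasts, outcomes, risk_level, reoffended):
        # Among people with the given observed outcome, the share the forecaster
        # had placed in risk_level (the reading behind the 89.8% / 81.2% figures).
        group = [f for f, o in zip(forecasts, outcomes) if o == reoffended]
        return sum(f == risk_level for f in group) / len(group) if group else 0.0

    def forecast_precision(forecasts, outcomes, risk_level, expected_outcome):
        # Among people given a particular forecast, the share whose outcome matched
        # it (the reading behind the 78.8% / 66.5% low-risk figures).
        group = [o for f, o in zip(forecasts, outcomes) if f == risk_level]
        return sum(o == expected_outcome for o in group) / len(group) if group else 0.0

    def overall_accuracy(forecasts, outcomes):
        # One simplified overall score: 'high' counted correct if the person
        # reoffended, 'low' correct if they did not; 'medium' never counted correct.
        correct = sum((f == 'high' and o) or (f == 'low' and not o)
                      for f, o in zip(forecasts, outcomes))
        return correct / len(forecasts)

    # Hypothetical data for five offenders; True means returned to custody within two years.
    forecasts = ['high', 'low', 'medium', 'high', 'low']
    outcomes = [True, False, True, True, False]

    print(share_flagged(forecasts, outcomes, 'high', reoffended=True))
    print(forecast_precision(forecasts, outcomes, 'low', expected_outcome=False))
    print(overall_accuracy(forecasts, outcomes))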

In his preliminary report, Dr Barnes said: “These figures show the value of human decision making. Not surprisingly, experienced police officers do quite well at anticipating who will and will not be re-arrested in the future. At the same time, however, Hart’s algorithm did noticeably better than its human counterparts”.

The preliminary report goes on to say that Hart provides greater consistency than human decision-makers could produce, because it is based on more than 100,000 separate custody events.

Dr Barnes’s report added: “Human custody officers would take decades to gain exposure to this many different custody events, and each officer will, naturally and understandably, evolve different rules to guide their predictions.

“Each officer will attach different levels of importance to many varying factors when making decisions about offenders and their criminal cases. The Hart model therefore not only increased accuracy, but provided a consistent application of the same decision-making rules for every offender it evaluated.

“Human decision making is clearly worth listening to, and the Hart forecasts were never (and should never be) used as an automated replacement for human judgment. At the same time, these forecasts can be used to support our officers, challenge them when they disagree with the algorithm, and steer us towards greater consistency and fairness as our officers gain experience in their roles”.

Following the completion of the research project, Durham Constabulary stopped using Hart in September 2020 due to the resources required to constantly refine and refresh the model to comply with appropriate ethical and legal oversight and governance.

However, the force remains committed to research and understanding how it can best support its officers to provide an efficient, effective and fair service to its communities.

In its use of Hart, Durham Constabulary has contributed significantly to the debate around the ethical use and governance of such decision support models. In conducting this research, Durham Constabulary worked with other partners, including Winchester University and Sheffield University, to develop a legal and ethical framework surrounding the use of Hart, which was presented to both House of Commons and House of Lords committees.

Dr Barnes is expected to publish his full academic study into the use of Hart in a final report at a date yet to be confirmed.
