The rise of artificial intelligence is being held back by human factors, Dr Lee Howells argues, and strong leadership is needed to set the right operating and ethical framework so that people retain the upper hand over the machines.
The debate about the rise of artificial intelligence (AI) often focuses on the capabilities of the technology, rather than the capabilities and reactions of the people who will use it. That was one of the issues raised at a recent event on AI, prediction and criminal justice decision-making that we helped the Royal United Services Institute (RUSI) host.
All the expert speakers, including former Metropolitan Police Service Commissioner Lord Hogan-Howe and me, agreed that human factors are as important as, if not more important than, the technology in implementing AI effectively in the policing and justice sector.
Part of the problem is the ‘Hollywoodisation’ of AI. There is a view of machines on the march, riding roughshod over what we value in humanity, rather than one of AI enabling better decision-making and more fulfilling work.
That is one of the challenges facing police forces as they explore using algorithms to predict the risk of reoffending, allocate resources better and protect victims more effectively. As they try to use AI, they come up against the view that an algorithm could never be as good as a person and that an experienced gut feeling is more reliable, even when the evidence shows this is not true.
Part of the worry arises from the fact that AI decisions are based upon the data provided to train the algorithms and, as with all technology, there is the risk of a ‘garbage in, garbage out’ problem.
In complex environments such as policing, there is the further question of whether the data can ever provide the whole picture and whether everyone agrees on what information is relevant. This is in addition to the known problems of data containing prejudice and bias, which we need to work hard to identify and remove.
A further challenge is that the data may contain examples of decisions made using ‘off-book’ knowledge, information that the algorithm does not have. Those decisions may have been correct, but without all the data an AI algorithm will not be able to discern a pattern to base future decisions on.
This, in part, is why it is often felt that justice cannot be delivered without the discretion of experienced people. There is a worry that AI’s decisions are too binary. This increases the risk that useful predictive AI will not be implemented because it is held to a higher standard of accuracy than a human expert.
Then there is the question of how a police officer will respond to a prediction made by AI. Will they choose to ignore it? Will they use it only to back up what they were going to do anyway? Will they dare to challenge the machine? Or will a new generation of police officers lack the knowledge and confidence to make their own judgments?
For legal experts, the concerns about AI tend to focus on the need to provide reasons for decisions in ways people can understand, something algorithms cannot always do. AI often makes ‘black box’ decisions where the raw data and workings that conclusions are based on are not visible, auditable, or explainable. This presents a real challenge for law enforcement – which is increasingly required to maintain decision logs – and the courts.
So to gain acceptance, it seems a person will need to be involved in processes such as bail, sentencing, parole and resource allocation.
Regulation
As with any new technology, regulation is critical to dealing with issues of privacy, fairness, consent, proportionality, transparency and oversight. This includes being clear about when AI is used to support human decision-making and when it makes the decisions itself.
However good the regulation, there will be hard cases to resolve. We must work out how to deal with a situation in which a person overrides an AI recommendation and harm follows. Equally, we need to understand the implications when following an AI recommendation leads to a bad outcome. This will create a host of issues around liability and professional standards. Yet there is a danger that if we focus too heavily on the potential negatives and the dilemmas AI presents, we will miss out on its potential to reduce harm.
Given the difficulties of implementation and the need to prove algorithms are accurate, fair and unbiased, there is a tendency to just wait and see. But postponement brings its own risks. There is a need to get started.
By starting, we will see that AI is not that different from other technology. It will do what we tell it to do, even if it is wrong, only faster and more systematically than we can.
AI could have a fundamental impact on the way we prevent and reduce crime, and could make us safer. But those benefits will be irrelevant if people refuse to use it. What is needed is strong, visible and informed leadership that can create confidence in AI’s potential. Leaders must ensure we create the right operating and ethical framework, and understand that humans still hold the upper hand in the rise of the machines.
Dr Lee Howells is an AI and automation expert at PA Consulting Group.