Getting on board with Artificial Intelligence
In the first of a series of articles, Matt Palmer, Product Director, Public Safety at NEC Software Solutions, suggests that the more open we are about AI in policing, the more confident the public will be in its use.
When AI first gained a foothold in the public consciousness, people were quick to express their fears that robots would steal our jobs or that machines would de-skill the human race.
Since then, we’ve all become much more aware of the benefits of AI – how it can streamline business processes, enable earlier diagnosis of disease, or predict the impact of our carbon footprint on the climate.
Meanwhile, our own experiences of AI have evolved. We’re increasingly using AI-enabled tools in our everyday activities without even thinking about it, from the face ID on our phones to the digital voice assistants in our homes.
However, when people think about how AI could be used in policing, opinions are more divided. Policing is fundamentally a human service, dealing with very human actions and intentions. Where could AI possibly fit in?
Signs of public support for AI in policing
According to the UK Government’s 2024 report into public attitudes to data and AI, 44% of people think AI will have a positive impact on crime prevention and detection, while only 19% foresee a negative impact.
This sits between the 51% who think AI will be positive for healthcare and the 39% who see its benefits for education. In this context, people appear cautiously optimistic about the role of AI in policing.
These findings were also reflected in a UNICRI international report exploring public perceptions of police use of AI. The report recognises that there is public support for using AI to solve certain crimes, but that support comes with significant caveats, particularly around predictive policing and autonomous decision-making.
The report concludes that people believe AI should enhance, not replace, human judgement. They are, of course, absolutely correct.
We now need to take the opportunity to help people understand more about how AI can assist the police, because greater understanding will increase public support.
1. Explain how AI supports human decision-making
Although predictive policing is an area of concern for the public, using AI to make predictions does not mean leaving it to the machines to decide who to detain and who to release.
The role of AI is to provide information for the decision-making process.
Police forces have vast amounts of data which they can use to identify vulnerable people or pinpoint potential crime hotspots. Unfortunately, officers don’t have the time or the resources to sift through this data and use it to decide where to focus their attention.
AI can do the legwork by analysing the data, spotting patterns and making predictions. It’s then over to the human to interpret those results ethically and decide which actions to take.
When the police are assessing which members of the public need safeguarding, AI can assist by pulling together all the information on an individual which sits in disparate systems. This might include previous social care referrals, risk of homelessness status or exposure to people linked with child exploitation.
If these separate threads tie together to indicate someone is vulnerable, officers can decide if they need to intervene and protect that person.
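To make this concrete, here is a deliberately simplified sketch in Python of how risk indicators drawn from separate systems might be tied together into a single flag for an officer to review. The data sources, thresholds and field names are illustrative assumptions only, not a description of any force’s or supplier’s actual system.

from dataclasses import dataclass

@dataclass
class PersonRecord:
    person_id: str
    social_care_referrals: int         # from a social care system
    at_risk_of_homelessness: bool      # from a housing system
    linked_to_exploitation_risk: bool  # from intelligence records

def vulnerability_indicators(record: PersonRecord) -> list:
    """Collect the separate risk indicators present for one person."""
    indicators = []
    if record.social_care_referrals >= 2:
        indicators.append("repeat social care referrals")
    if record.at_risk_of_homelessness:
        indicators.append("risk of homelessness")
    if record.linked_to_exploitation_risk:
        indicators.append("exposure to people linked with exploitation")
    return indicators

def flag_for_review(record: PersonRecord) -> bool:
    """Flag only when several threads tie together; an officer decides what happens next."""
    return len(vulnerability_indicators(record)) >= 2

The flag does nothing on its own: its only output is a prompt for a human to look at the full picture and decide whether to intervene.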
Similarly, AI can join the dots across a vast number of variables to highlight trouble spots more accurately. The technology can find trends in crime data and cross-reference them with factors such as time, weather and events to predict which areas are more likely to see trouble that day. This helps officers decide how to allocate resources.
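As a toy illustration of this kind of cross-referencing, the Python sketch below fits a simple model to synthetic incident history and scores two sets of conditions for likely demand. The features, the made-up data and the choice of logistic regression are assumptions for the purpose of the example, not a real forecasting system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic history of area/time slots:
# columns are [hour_of_day, is_weekend, is_raining, local_event_on]
X = rng.integers(0, 2, size=(500, 4)).astype(float)
X[:, 0] = rng.integers(0, 24, size=500)

# Synthetic outcome: whether an incident was recorded in that slot
y = (((X[:, 0] > 20) & (X[:, 1] == 1)) | (X[:, 3] == 1)).astype(int)

model = LogisticRegression().fit(X, y)

# Score tonight's conditions for two areas; the probabilities inform,
# rather than dictate, where officers choose to patrol.
tonight = np.array([[23, 1, 0, 1],    # late, weekend, dry, event on
                    [14, 0, 1, 0]])   # afternoon, weekday, raining, no event
print(model.predict_proba(tonight)[:, 1])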
Making the public aware of the assistive part AI plays in predictive policing will reassure people that the decisions rest very firmly with the humans.
2. Deal with ethical questions head-on
As the public becomes more aware of the risks and benefits of using AI in policing, it will be increasingly important to engage in open dialogue about people’s concerns.
The difficulty is that many AI systems are not instantly transparent or understandable, even to the people using them, due to their ‘black box’ nature. Explainability is the key to demystifying AI, both to the police and to the public.
When AI presents any results to the police, there must be an explanation for the reasoning behind those results. That’s why it’s good practice for AI developers to have clear audit trails which keep track of how the technology is trained, and to work within recommended frameworks, such as the INTERPOL and UNICRI AI Toolkit.
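In practice, an audit trail can be as simple as a structured record written out for every training run, so that any result the system produces can later be traced back to how and on what the model was trained. The Python sketch below shows one possible shape for such a record; the field names and values are illustrative, not a prescribed standard.

import json
from datetime import datetime, timezone

def record_training_run(model_name, dataset_version, metrics, reviewer):
    """Append one training run to an append-only audit log."""
    entry = {
        "model": model_name,
        "dataset_version": dataset_version,                    # which data the model was trained on
        "trained_at": datetime.now(timezone.utc).isoformat(),  # when it was trained
        "evaluation_metrics": metrics,                         # accuracy and fairness checks
        "reviewed_by": reviewer,                                # human sign-off before deployment
    }
    with open("training_audit_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")

record_training_run("hotspot-model-v3", "incidents-2024-q2",
                    {"accuracy": 0.81, "parity_gap": 0.03},
                    reviewer="model governance board")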
One particular area of public concern is the possibility of bias and discrimination becoming embedded in machine learning models. For instance, if predictive tools are trained on historical arrest data where bias is present, the algorithms can perpetuate biased patterns such as negative racial profiling and targeting of minority communities.
Going back to our earlier example of pinpointing crime hotspots, developers can use diverse and representative datasets to train AI systems, and continually test those systems for discriminatory patterns to avoid bias becoming baked into the algorithms.
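One straightforward example of such a test is to compare how often a model flags cases across different groups and to review any large gap before the model is used. The sketch below illustrates the idea; the group labels, example outputs and the notion of an agreed tolerance are assumptions made for the illustration.

def flag_rate(predictions, groups, group):
    """Share of cases in `group` that the model flagged."""
    flagged = [p for p, g in zip(predictions, groups) if g == group]
    return sum(flagged) / len(flagged)

def parity_gap(predictions, groups):
    """Largest difference in flag rates between any two groups."""
    rates = {g: flag_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Model outputs (1 = flagged) alongside a protected characteristic.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# A gap well above an agreed tolerance would trigger retraining or review.
print(parity_gap(preds, groups))  # 0.5 here: a flag rate of 0.75 for A vs 0.25 for B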
Police forces can demonstrate that they work with suppliers who test their models independently to confirm accuracy, absence of bias and equality. Using AI as an assistive technology, incorporated into traditional policing processes, keeps a ‘human in the loop’ at the point where decisions, actions and recommendations are made.
Legislation is also starting to have an impact on ensuring organisations like the police select their suppliers responsibly, with the EU AI Act limiting the sale of AI solutions to those which comply with its regulations on AI development and use. Other regions are likely to introduce similar legislation on responsible AI procurement.
If police forces can explain to the public how they are working with their suppliers to mitigate the risk of bias, it will help to restore public confidence.
3. Show the impact of AI in policing
Another way to gain public acceptance is to increase awareness of the real-world impact AI is already having in law enforcement.
For instance, people often say they would like to see more of a police presence in their local area. If the public understood how much more time officers could spend in their neighbourhoods if they weren’t buried in paperwork, this would have a positive effect on public perception.
Identifying suspects and pursuing investigations takes time, resources and brainpower. AI can support policing by processing vast quantities of information to find suspects more quickly. The technology can also help to inform human decisions about how to proceed.
When a crime is reported, but there’s no forensic evidence, witness statements or CCTV footage, the likelihood of a successful investigation can be slim. It can be expensive and time-consuming to investigate a crime without clear lines of enquiry.
However, technology can help to assess all the possible factors around a reported crime and use them to provide a solvability indicator. A human decision-maker can then review and decide whether to follow a case up, for example with house-to-house enquiries or by viewing doorbell footage.
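As a purely illustrative sketch, a solvability indicator could be as simple as a weighted sum of the factors known about a case, presented to the reviewing officer alongside the case itself. The factors and weights below are assumptions for the example, not a real scoring model.

# Hypothetical factors and weights; a real indicator would be derived from
# evidence about which factors actually predict successful investigations.
SOLVABILITY_WEIGHTS = {
    "cctv_nearby": 0.30,
    "witness_statement": 0.25,
    "forensic_opportunity": 0.25,
    "suspect_named": 0.20,
}

def solvability_score(case_factors):
    """Sum the weights of the factors present in this case (0.0 to 1.0)."""
    return sum(w for f, w in SOLVABILITY_WEIGHTS.items() if case_factors.get(f))

case = {"cctv_nearby": True, "witness_statement": False,
        "forensic_opportunity": True, "suspect_named": False}

# The score informs, not replaces, the decision: an officer still reviews the
# case and chooses whether to follow up, for example with house-to-house enquiries.
print(f"Solvability indicator: {solvability_score(case):.2f}")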
In this way, AI helps the police to consolidate evidence much more efficiently, freeing humans up to determine how to act. At a time of limited resources, the time-saving potential of AI makes a significant difference to the country’s forces, and is a powerful message to communicate to the public.
As with all new technologies, there is bound to be some level of public concern about how AI will be used, particularly in the very human world of policing. However, by assuring the public that the highest ethical standards will always apply, policing can demonstrate the transformational impact of AI on relieving resourcing pressures and supporting human judgement.
More information about how the ethical use of AI can support policing can be found in the white paper, Artificial and human intelligence: partners against crime, available to download here.