Adopting ethical AI in policing

In the third and final instalment in this series of articles, Matt Palmer, Product Director, Public Safety at NEC Software Solutions, looks at four essential steps forces can take to ensure their use of AI complies with their ethical standards.

Sep 2, 2025

Four essential steps to adopting ethical AI in policing

There are many ways the police service can benefit from the extraordinary powers of AI.

In fact, AI-enabled technology is already helping police forces work more efficiently, saving time for officers so they can carry out the people-focused roles they joined the force to do.

AI in policing is so well established that according to the National Police Chiefs' Council (NPCC), all NPCC forces have adopted data analytics to some extent. At least 15 forces are also using advanced data analytics tools to make recommendations from complex data.

Some of the essential policing functions where AI is having a positive impact include automating data quality control, and triaging incoming 999/101 calls. With its ability to process vast amounts of data more quickly than the human brain, AI is becoming an essential part of the policing toolkit.

There are also situations where AI is able to detect patterns from data, and predict possible outcomes. For example, using machine learning to pull together historical data to assess how likely it is that an individual might reoffend, or how vulnerable a person might be to grooming behaviour.

While AI technologies offer exciting opportunities to support the police in tackling crime and safeguarding the public, there are risks involved in relying too heavily on technology to make decisions which can affect people’s lives.

To overcome these risks and make sure AI remains a force for good, policing must take an ethical approach to using AI.

There are some important steps the police can take to ensure they use AI responsibly and ethically.

  1. Demonstrate responsible data management

AI needs big data to train and develop its models. However, collecting and storing vast amounts of personal data presents risks such as the possibility of data breaches and misuse of information.

To ensure personal data is used ethically, police forces should introduce strict policies to determine which personnel are authorised to access data, and how that data will be used.

Police forces should also demonstrate that they comply with data protection laws such as GDPR, which regulate the collection, processing and storage of personal data. It is also important to have an incident response plan in place, which sets out protocols for responding to, and mitigating, data breaches.

As the public become increasingly aware of AI in policing, there has been a growing unease about the extent to which people are monitored through AI-powered surveillance tools. The police can help to reassure citizens by disclosing how these tools are used, as well as the criteria they apply to the way personal data is held.

For instance, forces can explain that facial recognition technology used at a railway station to check people's faces against a database of wanted criminals will automatically delete or anonymise the images of those who are not considered a match.
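The retention rule described above can be sketched in code. This is a minimal, purely illustrative example: the class names, the similarity threshold and the data are all assumptions for the sake of the sketch, not a description of any force's actual deployment.

```python
from dataclasses import dataclass

# Assumed threshold above which a captured face counts as a candidate
# match for human review; the real figure would be set operationally.
MATCH_THRESHOLD = 0.90

@dataclass
class CapturedFace:
    image_id: str
    similarity: float  # score against the watchlist, 0.0 to 1.0

def apply_retention_policy(faces: list[CapturedFace]) -> list[str]:
    """Retain only candidate matches for officer review; images of
    everyone else are simply never stored."""
    retained = []
    for face in faces:
        if face.similarity >= MATCH_THRESHOLD:
            retained.append(face.image_id)  # held for human review
        # non-matches fall through and are not retained
    return retained

captured = [
    CapturedFace("img-001", 0.95),
    CapturedFace("img-002", 0.30),
    CapturedFace("img-003", 0.10),
]
print(apply_retention_policy(captured))  # ['img-001']
```

The point of the sketch is that deletion of non-matches is the default path, not an afterthought: only a positive match triggers retention.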

If people understand why forces use data to prevent crime and keep people safe, they will have more trust and confidence that the data will be used ethically.

Data is essential to modern policing, but it's always important to reinforce the message that AI does not make decisions on its own. Decision-making stays firmly in human hands.

  2. Mitigate the risk of discrimination

One of the key concerns about AI in policing is the risk of bias in the data. All AI systems learn from initial training data, and if that data contains bias, there is the possibility of the bias becoming baked into AI models. This can reinforce discrimination and influence human decision-making.

Take, for example, a predictive tool trained on historical arrest data where human bias is present. If unchecked, the algorithm could replicate discriminatory patterns, such as disproportionately targeting minority communities.

To avoid this happening, the police and the AI developers should work together to conduct routine bias assessments, to see if there are any disparities in how people from different racial, ethnic and socioeconomic groups are treated.

This form of regular, human monitoring can alert police officers to any discrimination. It also prevents negative reinforcement loops, where unchecked, biased data is automatically fed back into the model.
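A routine bias assessment of the kind described above often starts with simple arithmetic: compare how often a model flags people from different groups. The sketch below is a hedged illustration with invented records; a real assessment would use audited operational data and statistical criteria agreed with the force's ethics oversight.

```python
from collections import Counter

def flag_rate_by_group(records):
    """records: list of (group, flagged) pairs from a model's outputs.
    Returns the proportion flagged within each group."""
    totals, flags = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        if flagged:
            flags[group] += 1
    return {g: flags[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group flag rate; values well
    below 1.0 suggest one group is being flagged disproportionately."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Invented example data: group A flagged 1 time in 4, group B 2 in 4.
records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", False), ("B", False)]
rates = flag_rate_by_group(records)
print(rates)                   # {'A': 0.25, 'B': 0.5}
print(disparity_ratio(rates))  # 0.5
```

A low ratio does not prove discrimination on its own, but it is exactly the kind of disparity that should trigger the human review the article describes.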

To access more detailed guidance on equality and non-discrimination, police forces can consult the AI Toolkit from INTERPOL and UNICRI, which provides practical examples of how to mitigate the risk of discrimination along the AI life cycle.

These include raising AI users’ awareness of negative stereotypes, employing teams of developers with a diverse representation of characteristics, and training humans to question and verify AI outputs.

  3. Make AI transparent and explainable

The challenge with demonstrating an ethical approach is that AI systems are complex and difficult to understand – even for those who use them. This is why explainability is so important.

When AI presents results to the police, there must be an explanation as to how the AI arrived at those results, so the police can make an informed decision about how, or indeed whether, to act.

It is good practice for application developers to have clear audit trails which keep track of how AI technology is trained and developed, and the decisions made as a result. If forces keep a detailed log of their AI systems, it is easier to identify any malicious intent, such as a rogue developer who has introduced deliberate bias into a machine learning model.
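One way to picture the audit trail described above is as an append-only log of tamper-evident records. This is a hypothetical sketch: the field names and schema are illustrative assumptions, not a standard or a vendor's actual format.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(model_version, input_data, output, reviewer):
    """Build one audit-log record. The input is stored as a SHA-256
    hash so the trail can later be verified against the original data
    without the log itself retaining raw personal information."""
    input_hash = hashlib.sha256(
        json.dumps(input_data, sort_keys=True).encode()
    ).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which trained model produced the output
        "input_hash": input_hash,
        "output": output,                 # what the system recommended
        "human_reviewer": reviewer,       # the person accountable for the decision
    }

# Illustrative usage with made-up identifiers.
entry = audit_entry("risk-model-v2.1", {"case": 123}, "low risk", "Officer 4421")
print(json.dumps(entry, indent=2))
```

Because each record ties a model version to a hashed input and a named human reviewer, unexplained changes in behaviour (including the "rogue developer" scenario above) become much easier to trace.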

Today’s police service is a major user of science and technology solutions, and police forces buy technology from a wide range of industry suppliers. This makes it important that the technology companies which supply the police are committed to supporting the ethical use of AI.

To ensure organisations like the police select their suppliers responsibly, the EU AI Act was introduced to limit the sale of AI products to those which comply with its regulations on AI development and use. Other regions are likely to introduce similar legislation to govern responsible AI procurement.

Similarly, the Police Industry Charter, launched in spring 2024, provides a new set of standards for policing technology, which encourages policing and industry to work in close partnership to improve public services. Organisations which sign up to the charter have to agree that all their products and services are transparent by default.

These standards and regulations help to encourage the police and their suppliers to work together to adopt an ethical approach to developing and using AI.

  4. Trust in human judgement

AI systems are not infallible. They can produce false positives, like incorrectly identifying innocent people as suspects. Equally, they can deliver false negatives and fail to identify genuine criminals. AI could unwittingly bring about miscarriages of justice in the absence of human judgement.
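The false positive and false negative risks above are usually quantified with two simple rates when a system is evaluated. The numbers below are invented purely to show the arithmetic.

```python
def error_rates(tp, fp, tn, fn):
    """False positive rate = fp / (fp + tn): innocent people wrongly flagged.
    False negative rate = fn / (fn + tp): genuine suspects missed."""
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return fpr, fnr

# Invented evaluation figures: 80 correct identifications, 5 people
# wrongly flagged, 910 correctly cleared, 5 genuine suspects missed.
fpr, fnr = error_rates(tp=80, fp=5, tn=910, fn=5)
print(f"false positive rate: {fpr:.3%}")  # 0.546%
print(f"false negative rate: {fnr:.3%}")  # 5.882%
```

Tracking both rates matters because they pull in opposite directions: tuning a system to miss fewer suspects typically flags more innocent people, which is precisely why the final judgement must rest with a human.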

There must always be safeguards which ensure police decisions which affect people’s lives are made by a human and not by technology. The ethical use of AI is about supporting officers, so they can make an informed decision.

AI capability is evolving rapidly, and it offers great potential to support police work, as long as it is used responsibly. Setting KPIs to evaluate the effectiveness and fairness of systems can help with this, as well as listening to feedback from officers and the public.

By combining the power of AI with the insight of the human mind, policing will be well placed to keep people and communities safe, now and in the future.

 

More information about how the ethical use of AI can support policing can be found in the white paper Artificial and human intelligence: partners against crime, available to download here: https://necsws33784.ac-page.com/ai-guide-download

 


Copyright © 2025 Police Professional