Intelligent response
Sam Sheppard explains how AI agents will give police force control room operators the context to ‘THRIVE’.
Modern police force control rooms are increasingly relying on automation to streamline processes, reduce pressure on operators and cope with rising demand in the face of resourcing challenges. While automation brings many advantages, it also has limits, and when pushed beyond them, can even become counter-productive. This is one of the reasons many public safety organisations are beginning to harness the potential of agentic artificial intelligence (AI) to support operators.
Automation works when control room operators can be freed from repetitive, mundane and clearly defined process-oriented tasks. Problems emerge when the technology is asked to operate beyond its capabilities. A prime example is keyword detection in telephone and online channel communication.
Systems designed to analyse unstructured data are adept at detecting keywords and phrases. This does have value in areas such as understanding demand trends and avoidable contact.
However, when coordinating an appropriate response to a ‘live’ situation, the context in which a word is used is all-important.
For example, an operator can instantly make the distinction between a call about a bomb and someone calling about an allergic reaction to a bath bomb.
However, as police force control rooms expand communication channels beyond phone and textphone (online portals, SMS, Live Chat, etc.) to make it easier for the public to report crime and emergencies, the need for systems to be able to understand how words relate is amplified. Without context, the word ‘bomb’ could automatically trigger escalation protocols, costly in both time and resources.
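The gap between bare keyword matching and context-aware detection can be illustrated with a minimal sketch. This is not any vendor's implementation: the watch-list, the benign-context word list and the two-word window are all invented for the example, standing in for the language understanding a real AI agent would apply.

```python
# A naive keyword detector flags any message containing a watch-listed word.
WATCHLIST = {"bomb"}

def naive_flag(message: str) -> bool:
    words = [w.strip(".,!?") for w in message.lower().split()]
    return any(w in WATCHLIST for w in words)

# A context-aware pass looks at the words around the hit before escalating.
# Real systems would use a language model; this hypothetical benign-context
# list merely stands in for that understanding.
BENIGN_NEIGHBOURS = {"bath", "shower", "fizzy"}

def contextual_flag(message: str) -> bool:
    words = [w.strip(".,!?") for w in message.lower().split()]
    for i, w in enumerate(words):
        if w in WATCHLIST:
            neighbours = set(words[max(0, i - 2):i] + words[i + 1:i + 3])
            if not neighbours & BENIGN_NEIGHBOURS:
                return True
    return False

print(naive_flag("I had a reaction to a bath bomb"))       # True: escalates
print(contextual_flag("I had a reaction to a bath bomb"))  # False: benign
print(contextual_flag("There is a bomb in the station"))   # True: escalates
```

The point of the sketch is the shape of the problem, not the rules themselves: the naive detector escalates both calls, while even crude context checking separates the emergency from the allergic reaction.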
This is where assistive AI agents start to help. They do not replace automation, or the need for an operator; instead they supply context, giving operators situational awareness and supporting more informed decision-making.
This need to understand context is not limited to preventing unnecessary escalations. The use of AI agents can be a boon for THRIVE (Threat, Harm, Risk, Investigation, Vulnerability and Engagement) assessments. THRIVE has been in place since 2017, and the College of Policing’s guidance sets out a four-step assessment process:
1. Identify an individual’s vulnerability or vulnerabilities.
2. Understand how these vulnerabilities interact with the situation to create harm or risk of harm.
3. Assess the level of harm or risk of harm.
4. Take appropriate and proportionate action if required, involving partners where they have the relevant skills and resources.
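The four steps above could be mapped onto a simple record in software roughly as follows. The field names, example values and risk labels are illustrative assumptions for the sketch, not the College of Policing’s model or any product’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class ThriveAssessment:
    # Step 1: identify the individual's vulnerability or vulnerabilities.
    vulnerabilities: list[str] = field(default_factory=list)
    # Step 2: how those vulnerabilities interact with the situation
    # to create harm or risk of harm.
    interactions: list[str] = field(default_factory=list)
    # Step 3: assessed level of harm or risk of harm (labels are invented).
    risk_level: str = "unassessed"  # e.g. "low" / "medium" / "high"
    # Step 4: proportionate action, involving partners where appropriate.
    action: str = ""
    partners: list[str] = field(default_factory=list)

assessment = ThriveAssessment(
    vulnerabilities=["caller is a child", "mentions domestic abuse"],
    interactions=["alone in the home with the person causing harm"],
    risk_level="high",
    action="immediate dispatch",
    partners=["social services"],
)
print(assessment.risk_level)  # high
```

An AI agent would draft a record like this from the interaction; the operator then reviews, validates and refines it before grading the contact.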
Traditional automation can support the THRIVE model through keyword and phrase detection and by highlighting predefined flags or specific fields. However, the model requires a thorough understanding of all available information. That requires context, and again this is where AI agents excel: analysing the wording used in the call or message, the tone captured in the transcription, and previous interaction history linked to the telephone number or email address. Small indicators of vulnerability may appear insignificant in isolation, but together they can entirely change the risk picture and the approach required.
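How small indicators combine can be sketched as a simple weighted score. The indicators, weights and threshold here are invented for illustration; a real agent would weigh far richer signals, but the principle is the same: each signal alone stays below the line, while together they cross it.

```python
# Hypothetical indicators and weights, invented for this sketch.
INDICATOR_WEIGHTS = {
    "hesitant tone": 1,
    "late-night contact": 1,
    "previous withdrawn report": 2,
    "third party audible in background": 2,
}
REVIEW_THRESHOLD = 3  # illustrative cut-off for enhanced review

def needs_enhanced_review(indicators: list[str]) -> bool:
    score = sum(INDICATOR_WEIGHTS.get(i, 0) for i in indicators)
    return score >= REVIEW_THRESHOLD

# Each indicator alone looks insignificant...
print(needs_enhanced_review(["hesitant tone"]))  # False
# ...but together they change the risk picture.
print(needs_enhanced_review(
    ["hesitant tone", "late-night contact", "previous withdrawn report"]
))  # True
```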
This approach can be helpful in interactions involving mental health issues (which may or may not require police involvement), contact from children, violence against women and girls and other domestic violence-related offences, and Right Care Right Person health and social care-related contacts.
AI agents are designed to recognise how pieces of information relate to each other, allowing them to build a balanced picture of a situation and complete a THRIVE assessment, all in approximately three seconds. This frees the operator, who only needs a minute or two to review, validate and refine the assessment and grade the interaction, using their professional judgment.
By doing this heavy lifting, AI agents complement process automation and free operators to apply their training, experience and professional judgment. The technology reduces friction, surfacing the right information at the right moment so that better, faster, more informed decisions can be made under pressure.
With AI agents, police force control rooms are moving beyond software that simply records incidents and toward systems that actively support the people protecting the public.
Sam Sheppard is Pre-Sales Engineer at Octave.