Protecting teens online
Technology that spots risks early can help investigators extract actionable intelligence to protect youngsters online, as David Janson explains.
When teenagers are online, they are constantly at risk from criminals, sexual predators and extremist influencers. Largely unaware of the dangers, many teenagers are determined to explore what is forbidden and end up as victims.
Last year, for example, the National Crime Agency reported that 90 per cent of sextortion cases involve teenage boys, while research involving City of London Police found that 29 per cent of 13 to 21-year-olds have been victims of fraud, and 88 per cent had seen fraudulent content online in the previous 12 months.
The days of believing a small team of analysts or investigators can detect and counter these threats effectively are long gone: the complexity of the web, and the volume of data on the social media and messaging platforms where so much malign activity takes place, are now beyond the analytical capabilities of human investigators.
Officers need technology to broaden and accelerate investigations
Unless, that is, they have help from AI-powered social media intelligence (SOCMINT), which extracts insights from the labyrinth of platforms to flag when teenagers are at risk.
One of the many complications for investigators is that those seeking to influence or target teenagers will switch platforms and channels across the surface, deep and dark web. To keep pace, analysts need secure access to as many current and emerging platforms as possible, so they can collect data in a way that is structured for further analysis.
On mainstream social media this presents challenges, given the number of different account possibilities. It becomes even more problematic on obscure platforms such as 4chan, 8chan/8kun, Discord, Telegram, WeChat or Weibo, where identities are usually hidden, forums spring up and disappear, and encryption may be in use. 4chan, for example, has millions of monthly users, all of whom are anonymous. Despite waning popularity, it remains a highly potent forum for poisonous content.
Specialist social media intelligence (SOCMINT) technology is key
SOCMINT shines a light into the dark recesses of these platforms. It uses AI and analytical techniques to streamline and automate the discovery, detection and mapping of the entire digital footprints of dangerous or suspicious groups and individuals. The most advanced solutions can monitor many social media platforms at once, using AI-driven detectors to flag risky content. Investigators can then escalate when the risk indicators pick up a potential threat, such as sexual grooming activity.
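As a loose illustration of that detection step, the toy sketch below flags posts that match configurable indicator terms. The field names, terms and threshold are invented for the example; real detectors rely on trained models over far richer signals than simple phrase matching.

```python
# Illustrative sketch only: a toy risk detector that flags posts matching
# configurable indicator terms. The data model and terms here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    platform: str
    author: str
    text: str

# Hypothetical indicator terms an analyst might configure for a grooming risk profile.
RISK_TERMS = {"keep it secret", "don't tell your parents", "send me a photo"}

def flag_risky(posts, threshold=1):
    """Return posts whose text contains at least `threshold` indicator terms."""
    flagged = []
    for post in posts:
        hits = [term for term in RISK_TERMS if term in post.text.lower()]
        if len(hits) >= threshold:
            flagged.append((post, hits))
    return flagged

if __name__ == "__main__":
    sample = [Post("forum-x", "user123", "This is our thing, keep it secret, ok?")]
    for post, hits in flag_risky(sample):
        print(f"[{post.platform}] {post.author}: matched {hits}")
```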
Manual methods are inadequate for establishing perpetrators’ identities and analysing their interactions with potential victims.
With zettabytes of data on social media, it is almost impossible to find the information an investigation requires using manual methods. It is often also necessary to cross-reference findings from social media with data from other sources, such as police intelligence, public databases and legal records.
Doing all this at the scale required is simply impossible for human investigators tapping away at keyboards.
AI-powered automation is essential, with the proviso that decisions about who or what to investigate, and how far to pursue it, must always rest with the human judgement of police officers.
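To make the cross-referencing step concrete, the sketch below joins hypothetical social media findings with records from other sources keyed on a shared identifier. Every record, field and identifier here is invented; real systems resolve identities with far more rigour than a simple lookup.

```python
# Minimal sketch of cross-referencing, assuming each source exposes records
# keyed by a shared identifier (here a hypothetical email address).
social_findings = [
    {"handle": "wolf88", "email": "w@example.com", "risk": "grooming indicators"},
]
police_intel = {
    "w@example.com": {"reference": "INTEL-0042", "note": "previous report on file"},
}
public_records = {
    "w@example.com": {"name_on_record": "J. Doe"},
}

def cross_reference(findings, *sources):
    """Attach matching records from each extra source to every social media finding."""
    enriched = []
    for finding in findings:
        record = dict(finding)
        for source in sources:
            match = source.get(finding["email"])
            if match:
                record.update(match)
        enriched.append(record)
    return enriched

print(cross_reference(social_findings, police_intel, public_records))
```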
Detecting levels of engagement and mapping connections
One of the advantages of SOCMINT is that it can establish levels of user engagement on a platform, providing a layer of insight unattainable through manual monitoring. By deploying analytics across large volumes of data, it surfaces links between accounts or platforms that are invisible to human analysts.
Investigators can scan tens of thousands of entities using AI-powered risk detectors configured to their needs. Sexual groomers, for example, may have multiple identities on different platforms and only AI can detect the tell-tale markers that link them.
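The sketch below gives a rough flavour of how such linking might work, comparing simple handle and phrasing markers across accounts with a Jaccard overlap score. The accounts, markers and threshold are invented; production tools use trained models over many more signals than this.

```python
# Illustrative sketch: link candidate accounts across platforms by comparing
# crude profile/style markers. All accounts and thresholds are hypothetical.
def markers(account):
    """Collect crude markers: handle fragments plus distinctive phrases used in posts."""
    tokens = set(account["handle"].lower().replace("_", " ").split())
    tokens |= {phrase.lower() for phrase in account["phrases"]}
    return tokens

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def link_candidates(accounts, threshold=0.4):
    """Return pairs of accounts whose marker overlap exceeds the threshold."""
    pairs = []
    for i, left in enumerate(accounts):
        for right in accounts[i + 1:]:
            score = jaccard(markers(left), markers(right))
            if score >= threshold:
                pairs.append((left["handle"], right["handle"], round(score, 2)))
    return pairs

accounts = [
    {"platform": "forum-a", "handle": "lone_wolf88", "phrases": ["our little secret"]},
    {"platform": "chat-b", "handle": "wolf88", "phrases": ["our little secret", "dm me"]},
]
print(link_candidates(accounts))
```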
Keeping pace with bad actors online
The most advanced AI-enabled risk analytics in SOCMINT combine analysis of the text in posts with data from images and videos, and extend across the notorious online 'Com' networks. The insights they provide feed into analysts' workflows, accelerating detection many times over. Users do not need any data science expertise to change what SOCMINT solutions search for when new risks emerge.
The algorithms in the solutions are not baffled by changes in online jargon or memes. They also have the critical ability to pick up sentiment and emotion through near real-time analysis of language. When teams are dealing with sexual or extremist grooming, this is invaluable. Multi-lingual data analysis extends these capabilities to foreign-owned platforms.
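As a simplified illustration of sentiment scoring, the toy example below averages lexicon scores over a message. The lexicon and messages are invented, and deployed systems use trained multilingual models rather than word lists, but the idea of turning language into a signed score in near real time is the same.

```python
# Toy lexicon-based sentiment scorer, for illustration only.
# The lexicon values below are invented.
SENTIMENT_LEXICON = {
    "love": 1.0, "trust": 0.8, "special": 0.6,
    "scared": -0.8, "alone": -0.5, "secret": -0.4,
}

def sentiment_score(text):
    """Average the lexicon scores of words present in the text (0.0 if none match)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = [SENTIMENT_LEXICON[w] for w in words if w in SENTIMENT_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

for message in ["you are so special, this is our secret", "i feel scared and alone"]:
    print(f"{sentiment_score(message):+.2f}  {message}")
```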
Cutting to the chase
Investigators may fear that SOCMINT technology will overwhelm them with false alerts. Smart prioritisation, combined with configurable AI-enabled risk detection, however, ensures analysts only receive the insights relevant to their investigations and reduces the time spent weeding out dead-end lines of inquiry.
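A minimal sketch of that prioritisation idea is shown below, ranking hypothetical alerts by detector confidence weighted by a configurable severity per risk type. The weights, fields and numbers are assumptions for illustration only; the point is the ranking and cut-off, not the specific values.

```python
# Illustrative alert prioritisation: score = detector confidence x severity weight.
# Severity weights and alerts below are hypothetical.
SEVERITY = {"grooming": 1.0, "fraud": 0.7, "spam": 0.2}

def prioritise(alerts, min_score=0.5, limit=10):
    """Rank alerts by confidence x severity and keep only those worth an analyst's time."""
    scored = [
        (alert["confidence"] * SEVERITY.get(alert["risk_type"], 0.1), alert)
        for alert in alerts
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [alert for score, alert in scored if score >= min_score][:limit]

alerts = [
    {"id": 1, "risk_type": "spam", "confidence": 0.95},
    {"id": 2, "risk_type": "grooming", "confidence": 0.70},
    {"id": 3, "risk_type": "fraud", "confidence": 0.40},
]
print(prioritise(alerts))  # only the grooming alert clears the bar
```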
All of us know the problem of teenagers being targeted online or seduced into crime is unlikely to go away. Teenagers will always want to push boundaries into dangerous areas. But investigating officers need more advanced technology to detect the sexual predators, extremists and criminals who prey on them.
The simple fact is that the scale of the web means investigators cannot extract actionable intelligence from the online world without help from AI-driven technology. The dangers are multiplying, and forces require better tools to counter them and protect our vulnerable teenagers online.
David Janson is VP EMEA at Fivecast.




