Global ransomware threat expected to rise with AI, NCSC warns

Artificial intelligence (AI) is expected to increase the global ransomware threat over the next two years, cyber chiefs have warned in a new report published on Wednesday (January 24).

Jan 25, 2024
By Paul Jacques

The assessment, ‘The near-term impact of AI on the cyber threat’, published by the National Cyber Security Centre (NCSC), a part of GCHQ, concludes that AI is already being used in malicious cyber activity and will almost certainly increase the volume and impact of cyber attacks – including ransomware – in the near term.

Among other conclusions, the report suggests that by lowering the barrier to entry for novice cyber criminals, hackers-for-hire and hacktivists, AI enables relatively unskilled threat actors to carry out more effective access and information-gathering operations. This enhanced access, combined with the improved targeting of victims afforded by AI, will contribute to the global ransomware threat over the next two years.

Ransomware continues to be the most acute cyber threat facing UK organisations and businesses, with cyber criminals adapting their business models to gain efficiencies and maximise profits, says the NCSC.

To tackle this enhanced threat, the Government has invested £2.6 billion under its Cyber Security Strategy to improve the UK’s resilience, with the NCSC and private industry already using AI to enhance cyber security resilience through improved threat detection and security-by-design.

NCSC chief executive officer Lindy Cameron said: “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.

“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.

“As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organisations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defences and boost their resilience to cyber attacks.”

Analysis from the National Crime Agency (NCA) suggests that cyber criminals have already started to develop criminal Generative AI (GenAI) and to offer ‘GenAI-as-a-service’, making improved capability available to anyone willing to pay.

Yet, as the NCSC’s new report makes clear, the effectiveness of GenAI models will be constrained by both the quantity and quality of data on which they are trained.

The growing commoditisation of AI-enabled capability mirrors warnings from a report jointly published by the two agencies in September 2023, which described the professionalising of the ransomware ecosystem and a shift towards the “ransomware-as-a-service” model.

According to the NCA, it is unlikely that another method of cybercrime will replace ransomware in 2024, given its financial rewards and established business model.

James Babbage, Director General for Threats at the NCA, said: “Ransomware continues to be a national security threat. As this report shows, the threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cyber criminals.

“AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods. Fraud and child sexual abuse are also particularly likely to be affected.

“The NCA will continue to protect the public and reduce the serious crime threat to the UK, including by targeting criminal use of GenAI and ensuring we adopt the technology ourselves where safe and effective.”

Most ransomware incidents typically result from cyber criminals exploiting ‘poor cyber hygiene’, rather than sophisticated attack techniques, the NCSC says.

The Bletchley Declaration, agreed at the UK-hosted AI Safety Summit at Bletchley Park in November, announced a first-of-its-kind global effort to manage the risks of frontier AI and ensure its safe and responsible development.

In the UK, the AI sector already employs 50,000 people and contributes £3.7 billion to the economy, with the Government dedicated to ensuring the national economy and jobs market evolve with technology as set out under the Prime Minister’s five priorities.

Jason Nurse, CybSafe’s director of Science and Research and Reader in Cyber Security at the University of Kent, said: “The NCSC report highlights how AI will exacerbate the cyber threat landscape in the UK. Threats are rising, and organisations’ defences are not yet up to the level needed to match the rising threat.

“Over the last year, the threat landscape has undeniably shifted. AI is shifting the goalposts, and various worldwide conflicts have contributed to more aggressive cyber activity. Attacks are also increasingly destructive, as attacks on the NHS display.

“The NCSC’s message is unambiguous – collective action is needed. Intelligence-sharing partnerships, cross-sector cooperation and a security-first culture focused on people not blame are vital to strengthening national cyber defences.

“Technical controls on their own, while crucial, are no longer sufficient. Securing the human factor must now also be considered a priority.”

Copyright © 2024 Police Professional