Generative AI Advances to Reshape the Cyber Risk Landscape

The rise of generative AI and Large Language Models (LLMs) heralds a new era in cyber risk, potentially enabling more effective and widespread cyber-attacks, warns a recent report from Lloyd's.
March 19, 2024

The emergence and advancement of Generative AI (GenAI) and Large Language Models (LLMs) are poised to significantly reshape the cyber risk landscape, reports Lloyd’s.

These technologies, which have seen substantial progress in the last six years, are becoming increasingly effective at tasks relevant to cybersecurity. Initial barriers such as AI model governance, cost, and hardware constraints are eroding: recent breakthroughs in algorithmic efficiency and the release of unrestricted frontier models mark a crucial shift in how access to these capabilities is controlled.

The implications for cyber risk are profound. The automation of vulnerability discovery, particularly for flaws that evade human experts, is expected to expand the options available to threat actors. Furthermore, the automation of target discovery could make cyber-campaign targeting and scoping cheaper, more precise, and more extensive. This suggests that threat actors may be able to generate custom attack materials for a wider range of potential targets.

AI is also anticipated to increase the effectiveness of skilled actors and reduce barriers to entry for cybercriminals, according to the report. While the risk of cyber catastrophes may see a modest increase, small-scale events are likely to rise at a faster pace as AI enhancements enable threat actors to design effective, targeted, lower-profile campaigns.

The advent of powerful generative AI models presents significant opportunities for innovation, but also introduces considerable risks of misuse and harm. To date, the AI industry’s focus on safety, together with the high cost of training LLMs, has limited widespread misuse. However, recent developments have reduced these costs, making the technology more accessible to threat actors.

Until September 2023, access to “frontier-grade” generative models was limited to large labs such as OpenAI, Meta, Anthropic, and Google, and was subject to those labs’ strict governance, oversight, and safeguards. However, it is now possible to run an LLM with capabilities equivalent to GPT-3.5 on consumer-grade hardware such as a MacBook M2, meaning those lab-side safeguards can be bypassed entirely, according to Lloyd’s.
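To illustrate how low this barrier has become, the sketch below loads a quantized model from local disk and generates text entirely offline, with no hosted API in the loop and therefore none of the provider-side safeguards described above. It assumes the open-source llama-cpp-python bindings and an already-downloaded GGUF-format model file; the file path is a placeholder, not a reference to any specific model, and none of this is drawn from the Lloyd’s report itself.

```python
# Minimal sketch: running a quantized LLM locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a GGUF model file on disk
# (the path below is a hypothetical placeholder).
from llama_cpp import Llama

# Load the model entirely on local hardware; no network access is required,
# so no hosted provider can enforce usage policies or content safeguards.
llm = Llama(
    model_path="./models/example-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window size in tokens
    n_threads=8,   # CPU threads; laptop-class hardware is sufficient
)

# Generate a completion offline.
output = llm(
    "Summarize the main drivers of cyber risk in one sentence.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```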

This development ushers in an era of cyber threat proliferation, in which powerful, special-purpose models can be easily created, distributed, and run on commodity hardware for cybercrime. While it will take time for threat actors to fully understand and industrialize these capabilities, it is clear they are already exploring the possibilities, the report stated.

The report concludes that while a sharp escalation in cyber risk is unlikely without significant improvements in AI effectiveness, the frequency of manageable cyber catastrophes is likely to increase moderately. This gradual rise in risk reflects steady but incremental progress in AI capabilities, highlighting the potential need for increased regulatory focus.

To access the full report, click here.

The R&I Editorial Team can be reached at [email protected].
