Cyber and Professional Liability Considerations to Weigh Before Incorporating Generative AI into Your Business

With huge potential to change businesses as we know them, generative AI promises to cut costs and give companies a competitive edge. But implementation requires a detailed review of AI’s equally powerful risks.
By: Autumn Demberger | January 9, 2025

Generative AI has tremendous potential to change how business is conducted. Emerging technologies will undoubtedly create more efficient operations. Cost savings, competitive advantage and an increase in market share are just some of gen AI’s touted promises.

But to adopt such nascent technology, organizations need to stay on top of its inevitable growing pains and inherent risks.

“When operating as a professional service firm, your customers look to you for guidance and rely on your expertise. This expectation is no different whether that service is provided by a human being, a human being supported by AI, or AI itself,” said Sean Clifford, vice president, financial institutions cyber lead and strategic national accounts, professional liability, BHSI.

“Any errors, inaccuracies or negligence in the rendering or performance of these professional services could ultimately result in liability,” Clifford said.

To fold generative AI into their business strategy, professional service providers should review their potential risks and get ahead of these challenges.

The Professional Liability Risks of Generative AI

AI is a very powerful tool no matter the service it helps to provide. AI can augment and support employees, but it also has the potential to operate autonomously when delivering services to customers.

Perhaps one of the biggest professional liability risks a business could face is an AI system making an error or rendering a biased decision that results in harm to individuals or to the business.

“For example,” Clifford said, “an AI-powered legal tool could assist in researching case law, but the information cited by the tool could be inaccurate or completely fabricated, leading to potential malpractice claims.

“Likewise, in a different industry like finance, unintentional algorithmic bias within a model could lead to loan denials based on race or gender, drawing both legal and regulatory scrutiny.”

No matter the industry, insurers are keen to carefully assess these risks and to better understand the unique exposures posed by AI.

“What’s also important is to mitigate overreliance on this technology by having a human in the loop, reviewing and validating the content and services before they are delivered to the customer,” Clifford said.

AI Risk and Cyber

Cyber is yet another risk area that gen AI adopters must review.

“The adoption of any new technology, including generative AI, can potentially create new attack surfaces that threat actors can exploit,” Clifford said. “While businesses may be eager to embrace this exciting technology, it’s imperative for organizations to appreciate the new risks it presents.”

With generative AI comes an entirely new attack surface for malicious threat actors, and one with very little history to extrapolate from. The types of attacks are still in their infancy, though some are starting to take shape.

“Securing generative AI is not yet a fully matured process,” Clifford said. “For instance, a company’s publicly facing AI chatbot that allows user prompts could be manipulated by a savvy threat actor. By maliciously phrasing prompts, they might bypass the model’s guardrails and trick it into divulging confidential information.”

This type of attack is known as prompt injection, and its goal is not to outsmart the AI but rather the developers who might not have anticipated all possible permutations.

“There’s always the potential to find a way around,” Clifford said.

“The risk lies in the possibility that a seemingly innocuous request, phrased in a specific manner, could lead to the unintended disclosure of protected data,” Clifford added.

Developers should work several steps ahead, implementing effective blocks and guardrails to prevent sensitive information leaks brought on by malicious actors.
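
To make that pattern concrete, here is a minimal, hypothetical sketch of the kind of layered screening Clifford describes: checking both the incoming prompt and the outgoing reply before anything reaches the customer. It is not drawn from any particular vendor’s product, and the pattern lists and function names are illustrative assumptions.

```python
import re

# Hypothetical guardrail layer for a public-facing chatbot. Real
# deployments add classifiers, allow-lists and data-loss-prevention
# scanners; this sketch only illustrates the screening pattern.

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) (system )?prompt",
]

SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",    # U.S. Social Security number format
    r"\b(?:\d[ -]?){13,16}\b",   # possible payment-card number
]

def looks_like_injection(prompt: str) -> bool:
    """Flag prompts that resemble known injection phrasings."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def leaks_protected_data(reply: str) -> bool:
    """Flag replies that appear to contain protected identifiers."""
    return any(re.search(p, reply) for p in SENSITIVE_PATTERNS)

def answer(prompt: str, model_call) -> str:
    """Screen input and output around a single LLM call."""
    if looks_like_injection(prompt):
        return "I can't help with that request."
    reply = model_call(prompt)  # the underlying LLM call
    if leaks_protected_data(reply):
        return "I can't share that information."
    return reply
```

Keyword filters like these are easily evaded by rephrasing, which is exactly Clifford’s point that “there’s always the potential to find a way around”; they are one layer of defense, not a complete solution.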

Additionally, cybercriminals can leverage AI for reconnaissance; targeting network assets; improving the scope, complexity and persuasiveness of phishing campaigns; and even reverse-engineering code to identify and exploit vulnerabilities.

Essentially, malicious actors have a tireless, virtual support system that continuously grows more efficient and dangerous over time.

Innovation and Data

One important risk to note is the possibility of consumer data leaks. Companies are keen to gather as much information as possible to feed into their large language models, but this race to the finish line could leave them vulnerable to greater risk.

“It’s akin to when fishermen cast their nets — other things are bound to get caught up in the process,” Clifford said.

“As AI models consume vast amounts of data, sensitive consumer information could inadvertently be swept up, raising significant privacy concerns,” he further explained. “Taking it a step further, recent regulations give consumers the right to have their data ‘forgotten’ by an organization. What happens when a consumer requests that their data be forgotten by a large language model that may not be able to be untrained on that information?”

Organizations must strike a delicate balance between the need for data to drive AI innovation and the imperative to protect consumer privacy and comply with evolving regulations.
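
One way to lean toward that balance, sketched below as an illustrative assumption rather than a complete privacy program, is to scrub recognizable personal identifiers before text ever enters a training corpus; redacting up front is far easier than trying to “untrain” a model later. The redaction patterns and placeholder labels are invented for the example.

```python
import re

# Hypothetical pre-ingestion scrub: redact recognizable personal
# identifiers from raw text before it joins a training corpus.
# Scrubbing up front avoids the untraining problem described above.

REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace identifiers with typed placeholders."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```

Pattern matching only catches obvious identifiers; names, addresses and indirect details that could re-identify a consumer need stronger controls, such as consent tracking and documented deletion workflows.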

“Navigating this intersection will be a critical priority for organizations in the coming years,” Clifford said.

Underwriter Considerations and Smart Steps to Bring in Gen AI

The rapid development and adoption of AI has created something of a paradigm shift. Organizations cannot wait on the sidelines while AI grows exponentially, nor can they blindly rush into adoption without considering the risks.

“There is no silver bullet when it comes to assessing the use of artificial intelligence, but it’s incumbent on underwriters to understand their customers’ businesses and work with them to understand where these new and evolving technologies will fit,” Clifford said.

Insureds want to be forward-thinking and look to emerging technologies to improve their business, but “we want to ensure they’re implementing it thoughtfully and cautiously,” Clifford said.

Clifford offered three steps businesses can take to introduce AI into the fold while remaining cognizant of how it could impact liability and cyber risk.

First, establish cross-functional committees to provide guidance and oversight to business units and implement frameworks and best practices surrounding the use of artificial intelligence.

“These groups should be involved in evaluating AI use cases and overseeing pilots prior to widespread deployment and adoption within the organization,” Clifford said.

Next, require human oversight wherever the technology is used, ensuring there will always be a human in the loop verifying outputs where AI is producing content or making decisions; a sketch of what such a gate could look like follows these steps.

Lastly, “it’s crucial to stay informed on new developments in this technology. Understanding the changes in the technology and the legal and regulatory landscape can be done by collaborating with outside counsel, law enforcement, threat intelligence groups, and industry peers to identify trends and emerging risks,” Clifford said.
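
Returning to the second step, the sketch below shows one hypothetical shape a human-in-the-loop gate could take: AI-generated work product sits in a review queue and cannot be delivered until a named reviewer signs off. The Draft class and function names are assumptions for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass

# Hypothetical review gate: AI output is held until a human approves
# it, and the approval leaves an audit trail of who signed off.

@dataclass
class Draft:
    content: str
    approved: bool = False
    reviewer: str | None = None  # requires Python 3.10+

def submit_for_review(content: str, queue: list[Draft]) -> Draft:
    """Queue AI-generated content for human review."""
    draft = Draft(content)
    queue.append(draft)
    return draft

def approve(draft: Draft, reviewer: str) -> None:
    """Record a human sign-off on the draft."""
    draft.approved = True
    draft.reviewer = reviewer

def deliver(draft: Draft) -> str:
    """Release content to the customer only after human approval."""
    if not draft.approved:
        raise PermissionError("AI output has not been reviewed by a human")
    return draft.content

# Usage: nothing reaches the customer until approve() is called.
queue: list[Draft] = []
d = submit_for_review("Draft client memo generated by the model.", queue)
approve(d, reviewer="j.smith")
print(deliver(d))
```

The key design choice is that deliver() fails closed: an unreviewed draft raises an error rather than slipping through to the customer.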

“I believe the companies that take a long-term view will be the most successful,” he said. “This technology is completely transformative, so it’s crucial to be mindful, thoughtful and pragmatic about its implementation.” &

Autumn Demberger is a freelance writer and can be reached at [email protected].
