AI Governance Failures Expose Organizations to Professional Liability Risks

Recent incidents in Australia highlight how poor oversight of AI tools can lead to costly errors and privacy violations, Lockton exec says.
By: The R&I Editorial Team | October 22, 2025

Two recent incidents demonstrate that the primary risk from artificial intelligence stems not from the technology itself, but from inadequate governance and quality assurance processes around AI-assisted work, according to a commentary from Lockton.

The incidents reveal a pattern of organizational failures in managing AI tools effectively, according to Mark Luckin, national manager of Cyber & Technology for Lockton Australia.

In the first case, a consulting firm used Azure OpenAI to produce a report that contained nonexistent references and fabricated court quotes, leading to corrections and a partial refund to the client.

The second incident saw a New South Wales government department contractor upload a spreadsheet containing thousands of rows of sensitive flood victim data directly into ChatGPT, creating a significant privacy breach.

These cases underscore how organizations across the tech and consultancy sectors are rushing to adopt AI for efficiency gains without establishing proper safeguards, Luckin writes. The commentary identifies three critical risk areas emerging from such failures:

– Uncontrolled data leakage through AI prompts (see the sketch after this list).
– Lack of oversight over where data resides when processed by external AI systems.
– The potential for inaccurate AI outputs to cause client losses and reputational damage.
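The first of those risks is mechanical enough to illustrate. The sketch below shows a deliberately naive pre-submission check that flags likely personal data before a prompt leaves the organization. It is a hypothetical illustration only: Lockton's commentary describes the risk, not this or any implementation, and real controls would rely on dedicated data-loss-prevention tooling. The patterns, threshold, and blocking behavior here are assumptions.

```python
import re

# Illustrative only: a naive outbound-prompt filter. The patterns below are
# assumptions for demonstration, not a recommended or complete DLP ruleset.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b(?:\+?\d[\s-]?){8,14}\d\b"),
}

def leakage_flags(prompt: str) -> list[str]:
    """Return the kinds of sensitive data detected in an outbound prompt."""
    return [kind for kind, pattern in PATTERNS.items() if pattern.search(prompt)]

# Hypothetical prompt a staff member might paste into an external AI tool.
prompt = "Summarize flood claims for Jane Citizen, jane@example.com, ph +61 2 5550 1234."
flags = leakage_flags(prompt)
if flags:
    # Block the external API call and route the prompt to human review instead.
    print("Blocked outbound prompt; detected:", ", ".join(flags))
```

Even a crude gate of this kind would have forced a pause before thousands of rows of flood victim data were pasted into ChatGPT, which is the point of the control: the check runs before the data leaves the organization, not after.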

Insurance and Risk Management Sectors Face Coverage Gaps

For the risk management and insurance industry, these incidents present immediate challenges in determining appropriate coverage, according to Luckin. Traditional cyber insurance policies may need explicit updates to cover AI-related data leakage, particularly when sensitive information is shared with AI vendors.

Professional indemnity insurance faces similar questions, as insurers must now assess whether existing policies adequately address claims arising from AI-generated errors in professional services.

The commentary notes that technology errors and omissions insurance becomes particularly important for firms developing or reselling AI solutions, as they face liability from defects or failures in AI systems. Some insurers are already developing specialized AI policies to address risks such as model failures, algorithmic bias, and errors from autonomous decision-making, suggesting a significant shift in how the industry approaches emerging technology risks.

Organizations Must Implement Comprehensive AI Governance

The path forward requires organizations to establish robust AI governance frameworks that go beyond simple usage policies.

“While AI offers great potential, it also brings new risks around data privacy, accuracy and liability,” Luckin wrote. “Organizations must manage these risks proactively with strong governance, updated insurance and careful oversight to protect themselves and maintain client trust in this changing landscape.”

According to Lockton, practical measures include developing a comprehensive QA checklist for AI-assisted deliverables that requires two-person verification of quotations and references, human review of all numerical claims, and documentation of prompts and drafts to evidence due diligence. On the contractual side, organizations should update agreements to specify when AI assistance is permitted, require disclosure of material AI use, and link fee adjustments to failed QA processes rather than imposing broad prohibitions on AI use.
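As an illustration of how such a checklist might be made concrete, the sketch below encodes the three checks Lockton describes (two-person verification of quotations and references, human review of numerical claims, and retention of prompts and drafts) as release-blocking conditions. The Deliverable record and its field names are hypothetical; the commentary prescribes the checks, not any implementation.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: Lockton's commentary describes these QA steps in
# prose and does not prescribe any particular implementation.

@dataclass
class Deliverable:
    """An AI-assisted deliverable awaiting release sign-off."""
    citations_verified_by: list[str] = field(default_factory=list)  # distinct reviewers
    numbers_reviewed_by_human: bool = False
    prompt_log: list[str] = field(default_factory=list)  # prompts and drafts kept as evidence

def qa_failures(d: Deliverable) -> list[str]:
    """Return the checklist items that would block release."""
    failures = []
    if len(set(d.citations_verified_by)) < 2:
        # Two-person verification for quotations and references.
        failures.append("quotations/references lack two-person verification")
    if not d.numbers_reviewed_by_human:
        # Human review of all numerical claims.
        failures.append("numerical claims not human-reviewed")
    if not d.prompt_log:
        # Documentation of prompts and drafts to evidence due diligence.
        failures.append("no prompt/draft log retained")
    return failures

report = Deliverable(citations_verified_by=["first reviewer"], prompt_log=["draft v1"])
for item in qa_failures(report):
    print("BLOCKED:", item)
```

Gating release on the full list, rather than on any single check, mirrors the commentary's point that verification and documentation together are what evidence due diligence.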

Read the full commentary here.

The R&I Editorial Team can be reached at [email protected].
