From Data Silos to Shared Insights: Transforming AI Risk Management
As artificial intelligence (AI) transforms industries, it brings both remarkable opportunities and significant risks—algorithmic bias, systemic failures in automated decision-making, data breaches, and deepfake fraud, to name a few. And some risks remain off our radar, waiting to emerge.
There are tremendous unknowns with AI. We’re learning as we go. But imagine if we got together and compared notes. Shared the data that we keep close. Collaborated more.
Why AI Risk Management Needs a Collaborative Approach
To manage AI risks effectively, that’s what needs to happen. Insurers, technology companies, and businesses need to change how they work together so we can collectively understand and mitigate the unknowns of AI, enhancing innovation while keeping risk in check.
Unlike traditional risks, AI-related risks often arise from proprietary algorithms trained on massive and sometimes undisclosed data sets. This creates a challenge for insurers in underwriting and pricing risks they don’t fully understand. Additionally, organizations using AI face scrutiny from regulators, investors, and customers, who are cautious about potential liabilities and operational disruptions stemming from AI failures.
In this landscape, no single player can address AI risks alone. Data sharing among insurers, tech companies, and businesses is essential for informed underwriting, proactive risk mitigation, and driving innovation.
Enhancing Risk Visibility
AI systems are only as good as the data they process, and understanding how these systems behave in the real world requires access to a diverse range of performance metrics, incident reports, and near-miss data.
Consider this hypothetical scenario. A health-tech company uses AI to assist in diagnostic imaging. Over time, the accuracy of the AI model begins to decline, a problem known as model drift: the model’s performance degrades as the underlying data distribution shifts away from what it was trained on. The client doesn’t detect the drift until incorrect diagnostic results trigger a malpractice claim.
Could such a claim be prevented if the company shared de-identified diagnostic and performance data with its insurer? The insurer, having aggregated similar data from other healthcare clients, detects a broader trend of model drift tied to recent changes in imaging technology. Working with a tech company, it co-develops a tool to monitor AI performance in real time and flag potential drift early. As a result, clients reduce liability exposure, tech firms improve model stability, and insurers gain tools to assess and mitigate evolving AI risks proactively. Think about it.
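To make the scenario concrete, here is a minimal sketch of what such a drift monitor might look like, assuming clients share periodic, de-identified accuracy figures with their insurer. The log values, window sizes, and tolerance below are hypothetical, chosen purely for illustration.

```python
from statistics import mean

# Hypothetical de-identified log: one accuracy score per reporting period,
# shared by a client with its insurer (illustrative values only).
accuracy_log = [0.94, 0.95, 0.93, 0.94, 0.92, 0.90, 0.88, 0.87]

def detect_drift(scores, baseline_window=4, recent_window=3, tolerance=0.03):
    """Flag drift when recent accuracy falls well below the baseline average.

    baseline_window: number of early periods treated as 'normal' performance.
    recent_window:   number of latest periods compared against the baseline.
    tolerance:       allowed drop before raising a flag (tunable per model).
    """
    if len(scores) < baseline_window + recent_window:
        return False  # not enough history to judge
    baseline = mean(scores[:baseline_window])
    recent = mean(scores[-recent_window:])
    return (baseline - recent) > tolerance

if detect_drift(accuracy_log):
    print("Model drift suspected: review imaging pipeline and retrain.")
```

An insurer aggregating these signals across many clients could then spot sector-wide drift, as in the scenario above, before any single client sees a claim.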
Coordinated Defense
Let’s look at another possibility. A global financial services firm is nearly defrauded by a convincing AI-generated voice deepfake impersonating a senior executive. The incident is isolated — until similar attempts are reported by others. How might sharing data help?
Let’s say the company reports the deepfake attempt and shares forensic data with its cyber insurer. The insurer, which covers multiple firms in the sector, sees a pattern and engages a cybersecurity tech partner. Together, they create an early-warning system that uses shared incident data and voiceprint signatures to alert other insured clients to similar threats.
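One plausible building block for that early-warning system is matching new audio against a shared library of voiceprint embeddings from reported incidents. The sketch below assumes fixed-length embedding vectors have already been extracted by a speaker-recognition model; the incident IDs, vectors, and similarity threshold are illustrative assumptions, not a real system.

```python
import math

# Hypothetical shared library of voiceprint embeddings from reported
# deepfake attempts (real embeddings would be high-dimensional vectors
# produced by a speaker-embedding model; values here are toy examples).
known_deepfake_prints = {
    "incident-001": [0.12, 0.87, 0.45, 0.33],
    "incident-002": [0.90, 0.10, 0.22, 0.70],
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_incident(candidate_print, threshold=0.95):
    """Return IDs of known incidents whose voiceprints closely match."""
    return [
        incident_id
        for incident_id, known in known_deepfake_prints.items()
        if cosine_similarity(candidate_print, known) >= threshold
    ]

# A new suspicious call: if it matches a known incident, alert other insureds.
suspect = [0.13, 0.86, 0.44, 0.35]
matches = match_incident(suspect)
if matches:
    print(f"Alert: voiceprint resembles reported incidents {matches}")
```

A production system would use far higher-dimensional embeddings and carefully calibrated thresholds, but the pooling logic is the same: the more insureds contribute incident data, the earlier the next target can be warned.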
Clients are better prepared to detect and stop emerging AI-based fraud. Insurers reduce exposure to large cyber claims. Tech firms build more responsive security tools using real-world attack data.
Sharing insights can significantly enhance understanding of potential risks, leading to better insurance coverage and risk management solutions.
A New Way of Working
AI risks are dynamic, evolving with model retraining, new data, and software updates. Traditional annual underwriting may not suffice for managing these risks. How will we have to change? Could secure data-sharing frameworks allow insurers to monitor model behavior and system changes, enabling timely adjustments to coverage terms that align risk exposure with insurance protection more effectively?
Could insurers incentivize responsible AI development by offering better coverage terms or enhanced limits to companies that adhere to best practices in governance, testing, and transparency?
In turn, might we encourage the creation of safer AI systems and help establish market norms for risk-resilient AI deployment? It’s all possible if we work closely together.
Overcoming Data Sharing Hesitancy
Despite its benefits, data sharing faces several hurdles, including privacy concerns, competitive sensitivities, and legal restrictions. There are ways to address these challenges:
- Establish clear data governance: Developing industry-wide standards for what data can be shared, how it’s anonymized, and who has access is essential. Frameworks like the EU’s AI Act or the NIST AI Risk Management Framework offer starting points.
- Build trust through transparency: Insurers and tech companies should clearly communicate how shared data will be used, how it’s protected, and what the benefits are for all parties. Pilot programs, sandbox environments, and data trusts (legal and organizational structures that allow data to be shared and managed while protecting the interests of those involved) can help demonstrate value and build confidence.
- Use enabling technologies: Secure multiparty computation, federated learning, and synthetic data generation let organizations pool insights without exposing raw or private data. These tools could play a key role in how we share risk information as AI becomes more widespread; the sketch below shows one of them in miniature.
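As one illustration, here is a minimal sketch of federated averaging, the core idea behind federated learning: each participant fits a model on its own private data and shares only the fitted parameters, never the records themselves. The firms, datasets, and simple linear model below are hypothetical, chosen to keep the example short.

```python
# Minimal federated-averaging sketch: each firm fits a simple linear model
# on its own private data and shares only the resulting coefficients.
# The firms and values are toy examples for illustration.

def local_fit(xs, ys):
    """Least-squares slope and intercept on one participant's private data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Each participant keeps its raw data in-house...
private_datasets = {
    "firm_a": ([1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9]),
    "firm_b": ([1, 2, 3, 4], [1.9, 4.1, 5.8, 8.2]),
}

# ...and only the fitted parameters leave each firm.
local_models = [local_fit(xs, ys) for xs, ys in private_datasets.values()]

# A coordinator averages the parameters into one shared model.
global_slope = sum(m[0] for m in local_models) / len(local_models)
global_intercept = sum(m[1] for m in local_models) / len(local_models)
print(f"Shared model: y = {global_slope:.2f}x + {global_intercept:.2f}")
```

The design point is that only aggregate parameters cross organizational boundaries, which is precisely what makes approaches like this attractive when the underlying data is competitively sensitive.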
A Data-driven Future Requires Collective Action
AI offers transformational benefits—but only if its risks are well understood and managed. The future of AI risk management relies on collaboration among insurers, technology companies, and clients, with secure and purposeful data sharing at its core.
Insurers should become proactive partners in AI governance by developing specific policies for AI risks, using loss data to improve model safety, and sharing risk insights with regulators. Meanwhile, technology companies need to integrate risk management into their AI development processes by offering transparency reports, collaborating with insurers on audits, and helping clients adopt responsible AI practices.
We won’t get here overnight. It will require trust, transparency, and shared standards. But the payoff is clear: a safer, more innovative digital economy where the promise of AI can be fully realized without compromising trust or stability.
By working together and leveraging data wisely, we can ensure that AI is not just smart, but also safe.