Insurance Firms Ready to Adopt AI Despite Data Quality, Bias Challenges
Most insurance companies are either already utilizing AI for business decisions or plan to do so within the year, despite facing significant challenges related to cost, data quality, and bias, according to a recent survey by the Ethical AI in Insurance Consortium (EAIC).
The survey results, which highlight an industry on the cusp of a technological revolution, found that 80% of insurers either already use AI or plan to adopt it: 14% of companies currently leverage AI in operational decisions, while the remaining two-thirds intend to implement the technology this year, the survey found.
EAIC is a collaborative effort dedicated to promoting responsible and ethical adoption of AI in the insurance industry. The group, which currently has 17 members, consists of insurers, insurtechs, influencers, and other stakeholders working to establish industry-wide standards, foster transparency, and ensure fair and accountable use of AI technologies.
“The implementation of AI in our organization has transformed the way we approach claims,” said Douglas Benalan, CIO of CURE Insurance and an EAIC member. “We’re already observing substantial gains in operational efficiency and accuracy. However, the journey is not without its ethical challenges, making the need for industry-wide collaboration and proper frameworks paramount.”
The survey identified IT (69%), sales (57%), and marketing (51%) as the leading insurance departments in current AI adoption. It also reported significant improvements in operational efficiency (57%), accuracy (37%), and revenue (37%) due to AI usage.
However, 69% of respondents expressed dissatisfaction with current approaches to reporting and addressing AI model biases and inaccuracies.
“AI in insurance relies on high quality and comprehensive choice of inputs into the models,” stated Abby Hosseini, Chief Digital Officer at Exavalu. “While the benefits of leveraging vast amount of data to enhance decisions is undeniable, the ramifications of using poor quality data and the prevalence of biased and selective input into the insurance AI models cannot be overlooked,” he said.
According to the survey, 97% of the companies that are already using AI have encountered challenges related to bias. The survey also identified an urgent need to educate employees on the risks of AI bias, with 57% of respondents stressing its importance.
It also highlighted the need for regulatory guidance, with only 23% of respondents satisfied with the support provided by regulatory bodies. Most companies advocated for regular AI audits and for training employees on AI legislation and its ethical dimensions.
“As insurers navigate the complex landscape of responsible AI implementation, it’s clear that regulatory guidance is paramount,” said Paige Waters, Partner at Locke Lord. “Some states are starting to lead the charge by issuing AI regulations and guidance, but it is clear that insurance regulators also are relying on existing laws to regulate AI practices while attempting to balance innovation with responsible AI use.”
For more information about the EAIC’s code of ethics and survey, visit the organization’s website.