Insuring an AI-Equipped World: Allianz’s Tresa Stephens Outlines Potential Underwriting Challenges at RIMS 2022
Artificial intelligence brings many benefits to the insurance industry — but we wouldn’t be risk management professionals if we didn’t consider the possible downsides, too.
The risks AI creates have to be managed and mitigated as our industry explores the benefits of innovative technology.
Tresa Stephens, head of cyber, tech & media – North America at Allianz Global Corporate & Specialty, presented her thoughts on the subject during her RIMS Innovation Hub session, titled “Artificial Intelligence: Implications for Cyber and Technology Errors and Omissions Insurance.”
Her session focused on the benefits of AI, like increasing efficiency and overall economic output of the business, while also recognizing the significant exposure and risk it presents to both businesses and consumers.
Stephens commented on the emergence of AI, saying, “It’s ushering in a whole new industrial revolution and it’s changing the way we live our lives. But as with any other emerging technology, there are risks we need to be cognizant of.”
One interesting focus area is the intersection of cyber risk and technology errors and omissions coverage, which brings new issues to the table.
One issue Stephens highlighted is changing consumer perception of AI and the kinds of risks it creates. The average layperson is exposed to AI through Hollywood fantasies like The Terminator and Minority Report. These tropes are designed to frighten people with extreme ideas of how technology could take over the world.
But the truth is much more mundane — we're not at Hollywood levels quite yet. AI is limited by human programming and the use cases we can dream up now.
But consumer perception remains critical for adoption, so the question becomes: How do we change customers' ideas of AI and show how the technology can help businesses and our industry grow and evolve?
Humans and Machines Work Together
Stephens focused her comments on how AI and people can work together.
It’s not about AI replacing humans; rather, the technology works in tandem with people to complement the work employees perform.
Stephens said AI is just like any other type of technology you’ve ever used, whether at work or at home. It should be considered a tool, and people should expect it will have some limitations just like any other tool or resource.
“I think of AI as like a great team member — everyone on the team is good at different things,” she said. “AI is good at supporting us for what we can’t do as well or as quickly.”
When you consider AI through this lens of working in concert with your human employees, you can see how the technology could become a necessary resource.
Limitations of AI
The session also addressed some of AI's limitations.
Stephens pointed out common issues, including bias that has been unknowingly built into AI systems. Another issue is that AI remains largely unregulated.
There are issues with evolving consumer privacy rights, as well.
And finally, AI could potentially change the landscape of liability. As vehicles become more autonomous and as technology evolves, insurance needs to change to meet the new risk.
There are no fully self-driving vehicles available to consumers in the U.S. today, but there is technology that allows vehicles to take over for parts of the drive. As innovations advance and regulation matches pace, insurers will need to underwrite these new types of products.
New questions arise, such as: Does auto insurance shift to product liability coverage? And how do we manage the overlap between the technology and the product itself?
The Future of AI
Stephens also spoke about the future of AI — what's next for AI applications and the implications for underwriting.
She noted coverages tend to blend when it comes to autonomous vehicles, even though insurance, in general, is often very siloed.
For example, we may see product liability coverage expand to include bodily injury (BI) coverage to meet the new needs of autonomous vehicle owners.
When considering how to manage the exposure from AI, Stephens recommended a human-centered approach.
Consider how the algorithms are affecting people. Be careful in how you review the data sets related to your use case. And test and audit your results, both before and after you go live.
Stephens cautioned, “Where AI is making recommendations, be very clear about your accepted error ratio.”
With 30 thought leadership sessions over the three-day event, RIMS’ Innovation Hub was an exciting and thought-provoking room on the exhibition floor for conference-goers looking for inspiration. Twenty-minute presentations about compelling topics, like this one, were slotted back-to-back throughout the conference. Attendees could enter and exit as they pleased, helping create an informal discussion space for ideas to flourish.
Like many other speakers, Stephens was surrounded by attendees with questions and thoughts to continue the conversation following her talk.