Four Ethical Pillars for Responsible Data Management in Insurance

With so much data available to insurers, collecting and using it responsibly is an ever more daunting task.
By: Katie Dwyer | May 15, 2024

As the saying goes, with great power comes great responsibility. That certainly is the case when it comes to data management in the insurance industry. Carriers have long relied on data to assess and price risk accurately, and modern technology has made the ability to collect, store and analyze data seemingly unlimited. Some estimates project that by 2025, the global volume of data will reach 175 zettabytes. That’s equivalent to nearly three trillion 64 GB iPhones.

But the risks associated with managing such a huge quantity of data are significant. Through the entire life cycle of data, from its generation and collection to storage and use, carriers and brokers have a responsibility to ensure accuracy, security and fairness in their data management practices. This is especially difficult in the face of constantly evolving AI technologies that lack regulation and whose potential is not well understood.

The need to carve out best practices while navigating this new frontier has given rise to the field of data ethics, whose principles can guide the responsible use of data in lieu of codified regulations.

“It’s the mission of data ethics to maintain and increase the trust of customers in data processing at Allianz, so it’s in our very own interest to handle data safely and securely,” said Philipp Raether, group chief privacy officer at Allianz.

“We achieve this by complying with applicable law, data privacy regulations, GDPR and others. All this extends into the space of AI, but responsible AI goes beyond existing laws. At Allianz, we’ve been using our five principles of responsible AI to ensure transparency, privacy, human agency and control, non-discrimination/fairness and accountability,” Raether continued.

Other insurers have similar doctrines governing their data practices, which share a few common traits. These are four of the key tenets of data ethics in today’s connected, AI-enabled world.

Transparent Collection

Insurers pull data from a variety of sources. Much comes from public databases, but clients also provide a wealth of information that is specific to their business and usually confidential. In the case of proprietary or personal data, insurers are obligated to ensure that the information was gathered and shared with consent.

An example from the personal insurance world demonstrates the risks of poor transparency.

Some automakers recently came under fire for collecting data from internet-connected vehicles and sharing it with third parties — either data brokers or insurers themselves. That data, including details around driving behaviors like speeding, sharp accelerations or hard braking, was then used to craft personalized insurance policies, resulting in higher rates for many customers. Most of those customers were unaware that they had agreed to share that data by signing agreements without reading the fine print.

Philipp Raether, group chief privacy officer, Allianz

Discovery of this stealthy data collection attracted the attention of politicians and regulators, sparking investigations into automakers’ practices and whether they violated the Federal Trade Commission Act.

While insurers have not been scrutinized for their use of this data, ethical data management nonetheless calls for confirming that information was gathered responsibly and transparently.

“Whenever data is shared with Allianz, we make sure to adhere to the law. If you acquire data, we make sure that the data is either non-personal data or that the provider had the consent to share this data,” Raether said.

Ensuring Integrity

Maintaining data integrity means ensuring the accuracy, reliability and completeness of a data set from collection through transmission and storage. Inaccuracies can slip in easily, whether through human error, a system glitch or deliberate falsification.

Generative AI can create a fake data set in a matter of minutes.

Insurance companies need controls in place to verify data and protect its authenticity. AI platforms used for data intake and processing can be helpful in detecting possible errors by flagging outliers, which are then reviewed by experts.
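The kind of outlier flagging described above can be sketched with a simple statistical rule. This is a minimal illustration, not any insurer's actual pipeline: the z-score threshold and the sample claim figures are assumptions chosen only to show the idea of routing anomalies to human review.

```python
from statistics import mean, stdev

def flag_outliers(values, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the
    mean, returning (index, value) pairs for expert review."""
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all values identical; nothing to flag
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mu) > z_threshold * sigma]

# Illustrative annual claim amounts with one suspicious entry
claims = [1200, 1350, 1100, 1280, 98000, 1190]
print(flag_outliers(claims, z_threshold=2.0))  # flags the 98000 entry
```

In practice the flagged records would be queued for a human reviewer rather than discarded, consistent with the review step described above.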

“kWh Analytics fact-checks the data to ensure there are no outliers and that the data makes sense,” said kWh Analytics CEO Jason Kaminsky. “Output from the models is only as good as the data that goes into them. Garbage in equals garbage out. Make sure data is high-quality when it’s fed to these models. It’s hard to get the genie back in the bottle, so you have to be strict on your data quality from the beginning. Do not let your standards slip.”

Access controls also protect data integrity after this initial processing by restricting opportunities to modify data once it is in storage.

“We have technical data protection, like encryption of data at rest and data in transit or data access management, and organizational data protection, like a strict need-to-know principle, meaning that employees only have access to the data they need to fulfill their roles. We educate our employees regularly with mandatory training in privacy, data protection and the broader IT context,” Raether said.
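The need-to-know principle Raether describes can be expressed as a simple role-based permission check. The roles and dataset names below are hypothetical, chosen only to illustrate the pattern of granting each role access to nothing beyond what its duties require.

```python
# Hypothetical role-to-dataset mapping illustrating a need-to-know policy
PERMISSIONS = {
    "underwriter": {"policy_data", "claims_history"},
    "actuary": {"claims_history", "loss_models"},
    "marketing": {"contact_preferences"},
}

def can_access(role, dataset):
    """Return True only if the role's duties require this dataset.
    Unknown roles get no access by default (deny-by-default)."""
    return dataset in PERMISSIONS.get(role, set())

print(can_access("actuary", "loss_models"))        # permitted
print(can_access("marketing", "claims_history"))   # denied
```

Denying by default for unlisted roles mirrors the organizational controls described above: access must be explicitly justified, never assumed.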

Protecting Privacy and Security

Keeping data safe is a critical yet complicated component of data ethics. The more data that exists, the more opportunities there are for a breach and the more resources it takes to protect it. A data breach can trigger regulatory investigations, fines, lawsuits and lasting reputational damage for insurers. Ensuring data security means going beyond compliance by building layers of protection and evaluating partnerships with data vendors carefully.

“Liability is closely linked to compliance with applicable privacy and information security laws. Both data analytics vendors and the insurance industry must comply with these requirements. With GDPR, however, when insurers are using third parties for data processing, the liability of compliance remains mainly with the insurer externally,” Raether said.

“In the light of this, Allianz makes sure to have adequate contractual relationships with vendors. We also check the compliance of vendors. It has occurred that a vendor was not chosen because we were not convinced that they fully complied with our privacy and data protection standards.”

Kaminsky added, “Other best practices include encryption, access controls, network monitoring, incident response plans and employee training. Organizations should assess whether existing IT infrastructure and expertise is sufficient for the volume and sensitivity of data and invest in upgrades if needed. Having verifiable backups is critical — and making sure that IT infrastructure is covered in case of disaster.”

Unbiased Data Interpretation and Application

Machine learning models have the potential to eliminate hours of manual labor in assessing and pricing risk. But the same ability to learn that makes these models so innovative is precisely what makes them dangerous. Insurers using these models need to regularly examine how the models incorporate and adapt to new data. Algorithms trained on existing data sets and programmed with parameters designed by humans can easily have bias built in, resulting in potentially discriminatory outcomes.

Jason Kaminsky, CEO, kWh Analytics

Kaminsky said that “the potential for bias to be baked into AI/ML models in opaque ways” is one of the risks that keep him up at night. Insurers need insight into how models are interpreting data before relying on their output for any decision-making.

“Avoiding overreliance on black-box systems will be critical. We need ongoing research and dialogue on these issues,” he said.

To that end, ethical use of data will always require oversight. Diligent auditing of algorithms and the data fed into them will be necessary to catch and correct bias.
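One common form of the auditing described above is a disparate-impact check: comparing a model's approval rates across groups. This sketch uses the widely cited four-fifths rule as the threshold; the group labels and decisions are illustrative assumptions, not real underwriting data, and a production audit would be far more extensive.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the approval rate per group."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + (1 if approved else 0)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Under the four-fifths rule, a ratio below 0.8 warrants review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions for two applicant groups
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)  # group A approves at a higher rate than group B
print(disparate_impact_ratio(rates))  # below 0.8: flag for review
```

A check like this only surfaces a disparity; deciding whether it reflects legitimate risk factors or built-in bias still requires the human oversight the article calls for.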

“The risks of AI were quite prominent from the start, and given that it is not a mature technology, the risks and challenges are well understood already. Especially with generative AI, the risks are transparency, accuracy and fairness/non-discrimination,” Raether said.

“At Allianz, we tackle these risks with a human-centric approach to AI: A human is always involved in case of risks stemming from AI.”

Katie Dwyer is a freelance editor and writer based out of Philadelphia. She can be reached at [email protected].
