Insurance Carriers: The Information Tech Risk You Underwrite Could Very Well Be Your Own

By: David Trepp | March 10, 2020

David Trepp leads BPM's Information Security Assessment Services practice, which offers Comprehensive Penetration Testing services that identify vulnerabilities in IT infrastructure, allowing organizations to make well-informed decisions about where to best allocate their resources. David can be reached at [email protected].

Providing an adequate level of IT assurance for insurance industry leaders has become a high-profile challenge, as agencies and underwriters of all sizes find themselves on the front lines of the global ransomware war.

To compound this problem, the insurance industry is faced with increasingly distributed IT systems as cloud infrastructure, hosted applications and outsourced services blur the boundaries between your company and the rest of the world.

So let’s define the issues and discuss actionable steps all insurance industry leaders can oversee to provide effective IT assurance activities that will help keep your company from becoming the next victim.

As we enter the new decade, our information systems are more interconnected than they’ve ever been. And this trend will not only continue, but accelerate.

A typical organization's computer system is interconnected with numerous integrated entities, including service vendors, cloud vendors, underwriters, agents and your own internal shadow IT.

And it doesn’t stop there: those vendors and their applications are in turn integrated with other clients, service providers and so on. The extent of interconnectedness is almost endless.

Providing real assurance across the boundaries of myriad interconnected systems is a daunting challenge. And the terrifying threat landscape we face puts business and personal interests, if not our very democratic institutions, at stake.

The question, then, is how an insurance business gains assurance about cyber security when the interconnections are myriad and the boundaries are blurred. Before we start exploring answers to this question, let’s review a typical real-world example of how breaches occur across system boundaries.

For our history lesson, let’s examine the infamous Target breach of 2013. This well-studied incident started with a third-party heating/air conditioning maintenance contractor.

This small vendor did not have the sophisticated security controls of a major retailer like Target. But, as a Target vendor, they did have access to Target’s vendor portal.

After hackers breached the heating/air conditioning vendor’s network (via a phishing email), they then had authenticated access to Target’s computer systems.

Throw in a missing patch on Target’s vendor portal web server, and the thieves proceeded to gain administrative control of the network and steal information on 40 million credit cards.

Similar scenarios have played out over and over again in the ensuing seven years and, as already noted, the number of integrated entities is only growing.

In modern integrated computer systems, assurance activities such as assessments, tests and audits are increasingly difficult to perform in multiparty environments. But there are a few key strategies and tactics that can help.

At the strategic level, the first thing all organizations should do is rigorously inventory data assets and data stores. Know where your data is stored and transmitted. Define who has data ownership responsibility and who is responsible for securing the data.

Only by knowing exactly what data your organization handles, and how you transmit and store it, can you begin to provide assurance about its confidentiality, integrity and availability (CIA).
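
To make this concrete, here is a minimal sketch, in Python, of how a single data-asset inventory record might be captured. The field names and example values are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataAsset:
    """One record in a data-asset inventory (all field names are illustrative)."""
    name: str                   # what the data is, e.g. "policyholder PII"
    classification: str         # sensitivity level, e.g. "confidential"
    stored_in: List[str]        # systems or vendors holding the data at rest
    transmitted_to: List[str]   # integrated entities that receive the data
    owner: str                  # business owner accountable for the data
    security_contact: str       # party responsible for securing the data

# Example entry: knowing where the data lives and who answers for it is the
# precondition for any assurance about confidentiality, integrity and availability.
inventory = [
    DataAsset(
        name="policyholder PII",
        classification="confidential",
        stored_in=["claims database", "hosted CRM vendor"],
        transmitted_to=["underwriting partner", "claims-processing service"],
        owner="VP of Claims",
        security_contact="Information Security Officer",
    ),
]
```

However the inventory is recorded, the point is the same: every data set should have a known location, a known transmission path and a named owner.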

Next, make sure you have strong contractual arrangements with your key vendors, service providers, and partners. Whether your relationship is with a cloud-application vendor or a key underwriter, strong interconnect security agreements provide the foundation for assurance across boundaries.

From data classifications and rules of behavior to incident reporting, a strong agreement defines responsibilities and reduces ambiguity.

For clear guidance on what’s essential to include in interconnect agreements, see the National Institute of Standards and Technology (NIST) Special Publication 800-47, Security Guide for Interconnecting Information Technology Systems.
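
As a rough illustration of the elements named above (data classifications, rules of behavior, incident reporting), the sketch below models an interconnect-agreement checklist as a simple structure. The keys are assumptions for illustration only, not an official SP 800-47 template:

```python
# Illustrative interconnect-agreement checklist; the keys mirror the elements
# discussed above and are not an official SP 800-47 template.
interconnect_agreement = {
    "parties": ["insurance carrier", "cloud-application vendor"],
    "data_classifications": ["confidential", "internal", "public"],
    "rules_of_behavior": "acceptable use of the shared connection",
    "security_responsibilities": {
        "carrier": "access provisioning and monitoring",
        "vendor": "patching, encryption in transit and at rest",
    },
    "incident_reporting": "notify the other party within an agreed time window",
    "testing_rights": "permission to include the interconnection in penetration tests",
    "review_cycle": "revisit the agreement at least annually",
}

# A quick completeness check: every section should be filled in before signing.
missing = [section for section, value in interconnect_agreement.items() if not value]
print("Missing sections:", missing or "none")
```

Whatever form the agreement takes, each of these sections should have an explicit answer before the interconnection goes live.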

Ultimately, at the tactical level, we must, in the words of Ronald Reagan, “trust but verify.” Only penetration testing of controls can reveal and validate the weaknesses in our IT defenses.

And testing must be comprehensive, across not only technical controls, but also human and physical safeguards. As outlined in the Target breach example, a simple human attack, in the form of a phishing email, can lead to a massive breach.

Testing technical controls alone will not answer the question “how can hackers successfully attack my IT systems?” Comprehensive testing will reveal cascading sequences of exploits that may lead from zero to control of your system in a way vulnerability scans or documentation reviews can never achieve.
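
To illustrate what a “cascading sequence of exploits” looks like in practice, the sketch below records a Target-style chain, as described earlier, as a list of linked findings. The structure and field names are hypothetical:

```python
# Hypothetical record of a cascading exploit chain, modeled on the Target
# example above: the foothold gained at each step enables the next one.
attack_chain = [
    {"control": "human", "finding": "vendor employee falls for a phishing email",
     "grants": "vendor network credentials"},
    {"control": "technical", "finding": "stolen credentials accepted by the vendor portal",
     "grants": "authenticated access to the retailer's systems"},
    {"control": "technical", "finding": "missing patch on the vendor portal web server",
     "grants": "administrative control of the internal network"},
    {"control": "technical", "finding": "reachable cardholder data stores",
     "grants": "bulk theft of payment card records"},
]

# A vulnerability scan flags findings in isolation; comprehensive testing
# shows how they chain from zero access to full compromise.
for step in attack_chain:
    print(f"[{step['control']}] {step['finding']} -> {step['grants']}")
```

No single finding in that chain looks catastrophic on its own; it is the sequence, crossing human and technical controls, that comprehensive testing is designed to expose.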

Including key third parties in your testing regimen is not always easy, but there are some compelling arguments you can make. Remind key integrated entities that we’re all in this together and may well share vulnerabilities. Attempt to convince them that only by testing the entire interconnected ecosystem can we find its weaknesses.

Reassure them the object is NOT to make them look bad. If all else fails and key third parties refuse to provide permission, demand to see evidence of their own due care and due diligence, e.g. a cover letter from their penetration testing vendor.

The good news is that big players, like Microsoft and Amazon, provide almost blanket permission for third-party testing of their cloud infrastructures and applications. Getting all cloud, outsourced and interconnected vendors to participate remains a challenge, though.

When discussing IT assurance across boundaries, we are often asked about cyber liability insurance and related products. In our current multiparty liability landscape, the devil is in the policy details. Whether selling or buying, all parties should make themselves well-aware of policy terms.

An easy example is defining exactly what systems, applications and data sets are covered by the policy, e.g. are industrial controls systems covered? How about mobile devices? Or cloud applications and systems?

The web of interconnected systems makes it difficult for underwriters to cover everything, and an all-inclusive policy can be difficult for policyholders to afford.

It’s also important to know about exceptions in coverage based on inadequate due diligence. For example, what if the policyholder is not performing adequate configuration management (missing patches) or adequate vendor management (failing to include vendors in tests or to collect evidence of their due diligence activities)?

All parties must also agree on what, exactly, constitutes a covered breach. Is a physical theft of information assets a covered act? How about a state-sponsored attack? Is an inadvertent disclosure of PII covered? The list goes on.
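
Pulling those coverage questions into one place, the sketch below captures them as a simple checklist that either party might walk through before binding a policy; the items simply restate the examples in the text:

```python
# Coverage questions from the discussion above, captured as a checklist.
# None means "not yet resolved"; every answer should become True or False.
coverage_questions = {
    "industrial control systems covered": None,
    "mobile devices covered": None,
    "cloud applications and systems covered": None,
    "physical theft of information assets covered": None,
    "state-sponsored attacks covered": None,
    "inadvertent disclosure of PII covered": None,
    "exclusion for inadequate configuration management (missing patches)": None,
    "exclusion for inadequate vendor management": None,
}

unresolved = [question for question, answer in coverage_questions.items() if answer is None]
print(f"Unresolved policy questions: {len(unresolved)}")
```

Before a policy is sold or bought, every one of those questions should have an explicit yes or no.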

The fact that the security of any single entity affects the security of the entire integrated ecosystem demands that we act in concert to defend our interests.

The least we can do is attempt to get our key third parties to participate in comprehensive testing. Even if they refuse, the result of our conversation with them will be increased awareness, better communication and, hopefully, a call to action.