McGill and Partners’ Nirali Shah on D&O Trends

How do you talk to your stakeholders, your investors, your employees, about artificial intelligence and what you are doing?
March 13, 2025

In early March, Dan Reynolds, the editor in chief of Risk & Insurance, caught up with Nirali Shah, head of U.S. D&O with McGill and Partners. What follows is a transcript of that discussion, edited for length and clarity.

Risk & Insurance: Thanks for meeting with us, Nirali. When you think about AI and the exposures it might create for officers and directors, what crosses your mind?

Nirali Shah: It’s been a really fascinating time for the D&O world. AI is one of the things that we are talking about quite frequently. In terms of exposures for boards, a lot of what we are talking to clients about is how they monitor the use of AI and how they monitor AI as a risk to their business, whether it’s competitive risk or embedded risk from a tech standpoint.

And then for D&O, an important topic is disclosure. How do you talk to your stakeholders, your investors, your employees, about artificial intelligence and what you are doing? How you are using it, how it’s good for you, how it’s not good for you, how it may or may not be good for your industry.

In addition to disclosure, companies are also considering how closely they have to monitor their use of it. One of the things that we talk about with clients is, do you understand how technology, whether it’s ChatGPT or some other variety of AI, may already be in use with the vendors that you work with? Are your employees using it and you’re not aware of that? Do you have parameters and policies around the use of that technology for your employees?

Are you building anything internally that is a closed system AI that you can start to really ramp up?

Clients obviously have to very closely monitor that. I think they’ve gotten much more aware of how embedded it can be with some of the vendors and suppliers that they work with. The next piece of that is, “Okay, well, then how do you disclose how that affects you from a risk standpoint?” That’s really where we see the evolution of it at the moment.

R&I: Your vendors may be adopting AI at a rate that you’re not fully aware of.

NS: It was surprising for some of our clients to learn that it was being used by vendors before people understood what AI even was, or that it was so mainstream. I think people didn’t understand that, for example, offerings from recruiters may have embedded AI that you’re just not aware of. This could potentially mean that the company was inadvertently discriminating against certain types of candidates because of the way that AI technology was screening people. Does that, then, lead to discrimination suits or systemic issues?

If you’re not asking those questions, if you’re not diving in a little bit further, you might not be aware of what’s happening in the background.

R&I: What about the use of AI in claims? Is that something that’s crossing your mind, or are you seeing conversations around that?

NS: It’s different, I think, for financial lines. It’s harder to use AI, because of the way that financial lines claims typically come in, and how you have to dissect them against the policy. That’s not to say that AI isn’t being used in some stage of the claims review process; it is just harder to use.

R&I: I’ve heard pretty much the same thing from financial lines underwriters. They’re saying much of what you’re saying.

NS: I still think for financial lines coverages, it is a more difficult process to teach an AI model how to look at it because some of it is very subjective. Teaching a model to read the way that a human would and be able to extrapolate the same kind of information is going to be a bit more challenging in my opinion.

R&I: I appreciate that. What privacy exposures or worries does the use of AI create?

NS: It’s been a fascinating thing to talk about with our cyber leaders as well because, obviously, there’s some crossover there in terms of privacy concerns. I think there is obviously more opportunity for bad behavior to happen in terms of private information being potentially utilized or stored in the wrong way when you have an external model that is reading that data. One of the things we have talked about is open system AI versus a closed system.

When a company builds an AI product for internal use only, it can control what data goes into the model and also where that data ultimately goes. Your privacy concerns become a little less pronounced, since you can better control that data. It is not an open system where anybody and everybody now has access to that data.

Where we started to see some issues at the very, very early stages of things like ChatGPT was that companies were finding that employees were going online and just using it to write their internal memos or do a presentation. All this potentially confidential client data was being put into these generative systems that get to keep that data.

So, are you violating privacy laws? Are you at risk of having privacy concerns when that data is being floated into an open system where you don’t have any control of what happens to it?

Companies have gotten very smart very quickly around that issue. Most of the companies that we have spoken to and worked with have policies around the use of ChatGPT or other open-source generative AI systems. If they still allow access through their IT platforms, there are restrictions around what kind of information you can plug in.

But certainly, there are privacy concerns around its use.

R&I: You may have already touched on it. But what would you say are best practices for disclosing the use of AI?

NS: It’s complex, and I think it will vary company to company, industry to industry, and frankly, geography to geography. There are rules in certain jurisdictions outside of the U.S. that are attempting to regulate the use and disclosure of AI. The U.S. hasn’t quite gotten there yet.

There are rumors of things in the works, and there are states like California that are very active in regulation and disclosure. Generally speaking, there is going to be a risk of under-disclosing and over-disclosing. What we encourage clients to do is understand what their risks are and understand the parameters within which they need to disclose them to investors and other stakeholders.

Also, make sure that those disclosures align appropriately with the environments that you’re operating in. That’s not always an easy answer. We’ve seen a wave of what are being dubbed AI-washing claims, where people are touting their use of AI or their investment in AI more than is factual.

They’re getting caught on that and that’s going to lead to some issues. We give clients examples of what might happen. We don’t have the perfect formula. It’s really going to be client-dependent. It’s going to be very specific to the geographies they operate in and the industry that they’re in.

But just understanding the parameters of what could go wrong when you’re making disclosures, and how you want to operate within that framework, is important.

R&I: What energizes you about the work that you do? What do you like about this kind of work?

NS: There’s a lot that I like. One of the really fascinating things for me is how dynamic financial lines insurance can be. We evolve sometimes as quickly as the world is evolving. The tie that we have to current events and how closely we have to understand the business world, the social world, the legal world of everything that companies are contending with is just fascinating to me.

Another thing is working with clients and being their sounding board for some of what’s going on and how it impacts them from an insurance standpoint. One of the reasons I became a broker is that I spent the first ten years of my career on the underwriting side, and the client interaction was always something that drew me in.

I’m also a self-professed policy nerd. So if you give me an insurance policy, I will read probably every word and pick it apart. It’s just something that really fascinates me. I think it’s an interesting exercise, especially when you start talking about how, if you change one word in one sentence, all of a sudden the coverage works differently.

Dan Reynolds is editor-in-chief of Risk & Insurance. He can be reached at [email protected].