John Kain of Amazon Web Services on the Use of AI in Finance
As head of market development across financial services at Amazon Web Services (AWS), John Kain and his team are responsible for helping clients transform their processes using new technology – largely driven by AI. With a financial services background of his own, Kain understands what it means to operate in a heavily regulated environment. Former Risk & Insurance Associate Editor and current contributing writer Katie Dwyer talked to him about how insurers are utilizing AI while ensuring compliance and good governance as the technology evolves.
Risk & Insurance: Thanks for your time, John. What constitutes a good governance framework around AI?
John Kain: AI has changed a little bit with the split between traditional and generative AI. Statistical processes and more traditional forms of machine learning aren’t new. The insurance industry as a whole has long understood what it means to build a well-governed process around using these systems. The first step is understanding the problem you’re trying to solve, the associated risks, and the level of risk you’re willing to take on. The second step is an evaluation of the data sets that are feeding the model. Are they accurate? Do I have confidence that the data I’m using will drive the right decisions? Am I using the right tool for the right job?
Third is monitoring the model. After you’ve trained the model, is it making predictions as you expected over time? Any significant deviations should warrant an audit. Those are the key components of what governs the machine learning process.
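The monitoring step Kain describes can be sketched as a simple drift check: compare a model's recent prediction rate against its training-time baseline and flag deviations large enough to warrant an audit. This is an illustrative sketch, not an AWS service API; the function name, data, and threshold are all hypothetical.

```python
# Hypothetical drift check: flag a model for audit when its recent
# positive-prediction rate deviates too far from the validation baseline.
# Names and thresholds are illustrative assumptions, not a real AWS API.

def deviation_exceeds_threshold(baseline_rate, recent_predictions, threshold=0.10):
    """Return True when the observed positive-prediction rate differs from
    the baseline rate by more than `threshold` (absolute difference)."""
    if not recent_predictions:
        return False  # nothing observed yet; no basis for an alert
    observed_rate = sum(recent_predictions) / len(recent_predictions)
    return abs(observed_rate - baseline_rate) > threshold

# Example: a claims model approved roughly 30% of cases during validation,
# but this recent window shows an 80% approval rate -- worth an audit.
recent = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
needs_audit = deviation_exceeds_threshold(0.30, recent)
```

In practice the comparison would be statistical (e.g. over feature and score distributions), but the governance pattern is the same: a baseline, a monitored window, and a threshold that triggers human review.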
R&I: How have generative and agentic AI complicated approaches to governance?
JK: Generative AI adds a new twist. It’s inherently unpredictable. Its statistical process is complex. Because it’s trained with large amounts of information, there is the potential to hallucinate. So the challenge for the industry has been, how do I build generative AI applications that take advantage of the capabilities it brings, while driving down hallucinations so you can trust that information and use it to automate processes or support your employees more efficiently?
Ultimately, you have to control what goes into the model to control what comes out of it. One example is restricting the questions you can ask a model to only relevant topics, to avoid skewing the output it produces, and checking that the output makes sense relative to the question that was asked. Once you have confidence that the model does what it is supposed to do, you can give it a set of data that speaks to the purpose of the model to provide a specific context. This drives down the hallucinations that can occur.
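The controls described above can be sketched as a thin wrapper around a model call: screen the question against approved topics, prepend curated context so answers are grounded, and sanity-check that the output relates to the question. Everything here is a hypothetical placeholder -- the topic list, the `call_model` function, and the relevance check are illustrative assumptions, not a real guardrails product.

```python
# Minimal sketch of input/output controls around a generative model.
# APPROVED_TOPICS, call_model, and the relevance check are all
# illustrative stand-ins, not a specific vendor's guardrails API.

APPROVED_TOPICS = {"claims", "policy", "coverage", "premium"}

def guarded_ask(question, context_documents, call_model):
    words = set(question.lower().split())
    # Input control: only accept questions on approved topics.
    if not words & APPROVED_TOPICS:
        return "Sorry, I can only answer questions about your policy or claims."
    # Grounding: constrain the model to curated data to reduce hallucinations.
    prompt = "Answer using only this context:\n" + "\n".join(context_documents)
    prompt += "\n\nQuestion: " + question
    answer = call_model(prompt)
    # Output control: a crude check that the answer echoes the question's topic.
    if not set(answer.lower().split()) & words:
        return "No confident answer found."
    return answer
```

Production systems would use embedding-based topic and relevance classifiers rather than keyword overlap, but the shape is the same: controls on the way in, grounding in the middle, checks on the way out.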
Recently, people are building agentic systems to be more workflow oriented. They’re breaking down the workflow into small discrete tasks and building agents that are good at a few things within the task. As an example, a task might be identifying the customer and another may be verifying their identity. Each step becomes more specific and therefore easier to manage. This makes it easier to put governance around that process.
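The decomposition Kain describes -- small agents, each good at one narrow task, chained into a workflow -- can be illustrated with a toy coordinator that runs agents in order and records every step for audit. The agent names, lookup logic, and data are invented for illustration.

```python
# Toy agentic workflow: each function is a narrow "agent" handling one
# discrete task, and the coordinator logs each step so the process is
# easy to govern and audit. All names and logic here are illustrative.

def identify_customer(request):
    # Stand-in for a real customer lookup.
    request["customer_id"] = request["email"].split("@")[0]
    return request

def verify_identity(request):
    # Stand-in for a real verification check.
    request["verified"] = request.get("customer_id") is not None
    return request

def run_workflow(request, agents):
    audit_log = []
    for agent in agents:
        request = agent(request)
        audit_log.append(agent.__name__)  # governance: record every step taken
    return request, audit_log

result, log = run_workflow({"email": "jane@example.com"},
                           [identify_customer, verify_identity])
```

Because each agent's scope is narrow, each step can be tested, monitored, and governed on its own terms -- which is the point of the decomposition.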
R&I: Do legacy systems complicate or stall the adoption of AI platforms or tools in the industry?
JK: Pressure to modernize is focused very much on improving customer experience overall. This has put pressure on the industry to get data out of those legacy systems in a way that can power insights that allow for more personalization. That’s a differentiator in the industry – the ability to use data for the benefit of your customers.
Once you take advantage of those capabilities, you need to be able to apply those insights to your core business. If you react quickly to the lessons gleaned from your data, it leads to a better system and better customer interaction. Generative AI has made some of those efforts easier. It can be used to help developers understand legacy mainframe platforms, which may use old code. It can inspect the code and help to suggest some places to re-architect it.
R&I: How is generative AI assisting in the evaluation of risk or underwriting decisions?
JK: Certain decisions we will always need humans for. Generative AI is taking that information and packaging it to make those decisions easier. AI can scan systems, aggregate data, and bring it to those decision makers in a much more structured way. You might get a report summary or a satellite image or some other information that normally would have taken an analyst days or weeks to inspect; now you have it in a matter of hours. It’s much more that the data that comes into the decision-making process has a generative AI touch, rather than AI making risk decisions.
R&I: How does the incorporation of more advanced AI tools impact the need for technical expertise in the industry?
JK: Two years ago, from a traditional machine learning perspective, there was a hunger across the industry for talent that could bring a technical understanding of AI, along with knowledge of the insurance industry’s processes and data. But the tools themselves have gotten considerably better. It’s made these tools accessible to less technical users.
Our customers are most excited about the ability for business users to get more technical guidance from these assistants and rapidly adapt. We’re seeing enablement of the business more broadly through generative AI technology.
It’s also important to note that you don’t have to build your own platforms. Large language models are hosted. We bring customers’ data to the model.
The industry isn’t necessarily looking for the right use cases for generative AI. The expertise the industry needs is more around building it in a way that addresses security, compliance, governance and scalability while driving the business forward.

