CRITICAL POINT EPISODE 59

Artificial intelligence and insurance, part 3: Global regulations and trends

By Robert Eaton, Dr. Sven Wagner, and Lee Sarkin (Munich Re)
11 February 2025

Artificial intelligence (AI) has taken the world by storm—but around the world, countries and insurance companies are approaching the technology differently. On this special episode of Critical Point, experts from Munich Re and Milliman who work across five continents come together for a global conversation exploring how insurers and regulators are responding to AI. Plus, don’t miss Part 1 and Part 2 of this series on AI and insurance.

Announcer: This podcast is intended solely for educational purposes and presents information of a general nature. It is not intended to guide or determine any specific individual situation, and persons should consult qualified professionals before taking specific action. The views expressed in this podcast are those of the speakers and not those of Milliman.

Robert Eaton: Hello, and welcome to Critical Point, brought to you by Milliman. I’m Robert Eaton and I’ll be your host today. I’m a Milliman principal and consulting actuary, and I live in Tampa, Florida. I work in the life insurance and long-term care space.

On this episode of Critical Point, we’re going to return to our series on artificial intelligence, and today we’ll look at how insurers and countries around the world are approaching the technology. We’ll compare regulations in the United States, the EU, Singapore, and elsewhere. We’ll discuss the impact of these government regulations versus internal IT controls. And we’ll consider the potential financial risks of relying on automation.

I’m really excited to have a couple of global experts joining me for today’s discussion. From Munich Re, I have Lee Sarkin, the chief analytics officer for life and health, Asia, Middle East, and Africa. Hi, Lee.

Lee Sarkin: Hi, Rob. Great to be here. Thanks for having me.

Robert Eaton: Yeah. And from Milliman I have Sven Wagner, a principal in our Dusseldorf office. Hi, Sven.

Sven Wagner: Hi, Rob, thank you for inviting me for this episode.

Robert Eaton: Absolutely, thank you both for joining me. Let’s jump in and let’s start with a general question first, maybe to Lee first: How are countries starting to regulate artificial intelligence in your world?

AI regulations in Singapore, a pioneer in APAC, the Middle East, and Africa

Lee Sarkin: In Singapore, which is maybe the pioneer in the broader region I operate in (Asia-Pacific (APAC), the Middle East, and Africa), they started in 2018 with a more principles-based framework, then moved into tools intended to implement those principles, and in the last year or two they have added gen AI onto that, the prior years having focused on traditional AI. Now they are really moving into a consolidated way forward, into industry AI handbooks covering traditional AI and gen AI. So there are banks, insurers, reinsurers, etc., participating in industry regulatory consortiums led by the Monetary Authority of Singapore. It is a very well-organized regulatory program of work. And I would say the Singapore regulator has really highlighted the need for innovation—so regulation, but not at the expense of innovation.

Robert Eaton: Thank you for that. Sven, tell me what you see in your area of the world.

How are European countries regulating AI and data privacy?

Sven Wagner: So what we see in Europe is that AI regulation is advancing rapidly. The EU AI Act has been published, and it aims to create a harmonized framework to ensure AI systems are safe, transparent, and respect people’s fundamental rights. This includes categorizing AI applications based on their risk levels, ranging from minimal risk to very high risk, and imposing stricter requirements especially on the higher-risk applications.

Countries like Germany and France, the bigger ones in the EU, are also implementing their own national strategies to complement these EU-wide regulations. They often focus on ethical AI, which is a focus across the EU, ensuring data privacy and fostering innovation in a controlled environment. In Germany we also have the Data Ethics Commission, which was established in 2018 and likewise focuses on the ethical side of AI. France launched its National AI Strategy in 2018, which includes initiatives to create ethical frameworks for AI development. And we also have the U.K., which is no longer part of the EU but is still a significant player in the AI regulation landscape; for example, it has the AI Council and the Data Ethics Framework. So it is a huge effort, and all of these countries are working to bring it forward.

And what we also have in the EU, and this is linked to AI, is the Digital Operational Resilience Act (DORA). This was also developed by the EU to address the challenges posed by digitalization and the threats from cyberattacks and disruptions, and its goal is to strengthen the operational resilience of the financial sector in the EU.

Yes, and we also have the General Data Protection Regulation (GDPR) in Europe, which sets a high standard for data privacy across the EU. So AI is a very heavily regulated area in the EU.

Lee Sarkin: To add to that, I think there are binding and nonbinding regulations, and maybe just to call out up front that, certainly in Singapore, although the guidelines are not strictly binding yet, they are treated as such by most mature financial services companies. Most of the AI solutions in production in this region already conform and are designed with that expectation in mind.

U.S. state responses to NAIC regulations on AI

Robert Eaton: Yeah, that’s really interesting. What I’d note in the United States is that we have a very federated system, where states choose whether to adopt regulations. There is a National Association of Insurance Commissioners (NAIC) model bulletin. It’s been adopted in 17 states, but other states have paid additional attention to the issue. In particular, Colorado issued a regulation effective in 2023, so it was one of the first; it looks at the sorts of models that insurance companies are going to use for rating and investigating, and at the AI components of those models.

I think, most notably, both Pennsylvania and New York have come out with their own guidance. Pennsylvania’s notice focuses on AI systems, which it defines as machine-based systems that can generate outputs like predictions. And New York, in particular, is interested in unlawful discrimination. The letters and notices in both Pennsylvania and New York encourage insurance companies to set up their own internal systems. And we see that that kind of establishment of a framework is what states are really encouraging insurance companies to do. It sounds like it’s in the same vein as what you both have shared on the EU and Singapore.

Lee Sarkin: Maybe just to pick up one or two points there, Robert, I think you mentioned “AI system.” That’s really fundamental in that it encompasses the full solution, not just the AI model itself, but all the technology to deploy, to manage it in production, and maybe even extending to systems it’s integrated with, that collectively become the AI system. To the point about internal company governance frameworks: Many companies have done so already, and then have the same discussion internally about an AI system definition.

I think, for the actuaries listening to this episode, it was fascinating to hear that, in our internal discourse, the view was that generalized linear models and similar would not be seen as AI systems. Most of us who have been doing pricing actuarial work for many years would know what those are. It’s the more traditional machine learning models—boosted trees, deep learning, and obviously gen AI now—that are considered AI models. And auto-underwriting engines, which are purely rule-based and deterministic, also fall outside the scope.

How are insurers around the world reacting to AI regulations?

Robert Eaton: Yeah. It kind of brings me to ask both of you: How do you see insurance companies? And you know, Lee, you can speak from your own company or some of what you observe across your clients, and Sven, you as well: How are insurance companies kind of reacting to these regulations? I imagine, you know, kind of a bare minimum approach is probably not true for everybody. Are there kind of internal guardrails, you know, compliance or actuarial, or maybe kind of third-party assessments that companies consider in addition to what’s required by the local regulation? Maybe Lee, you can start and Sven next?

Lee Sarkin: Sure. I think, depending on where in the world you are, the maturity of companies varies. Some are still in the phase of exploration and development or proof of concept. Others are in production, running solutions already.

Responsible AI, or AI regulation, spans both development and production; it’s the full AI lifecycle. So, you know, this is not something we check off preproduction and then forget. There’s an expectation of ongoing compliance postproduction.

Whilst Europe has the risk categories, other markets like Singapore don’t, but there is an appreciation of the materiality of the use case. With the use of AI in underwriting, the stakes are quite high: model errors in underwriting decisions translate into increased future claims, so there’s a risk of mispricing the business relative to the expected future claims that actuaries are pricing for. From a purely internal financial risk management perspective, independent of regulation, insurers and reinsurers have had to quantify, minimize, and manage that risk, firstly to ensure a business case for AI and secondly to protect the bottom line and all stakeholders impacted by the solution.

So I think, where actuaries intersect with AI regulation is in financial risk management. The regulations are looking at the end customer, as they should. The public interest. Actuaries, I think, can really come at this from the financial soundness of the consequences of the solution. You know, what’s the economic cost of model error and who pays it?

I would say companies are still fairly early; few have gone into production. I think that’s because, as my own team has experienced, you have to take care of responsible AI: practices, tools, etc., pre- and postproduction.

The one thing I would emphasize is the financial side of AI risk. All such AI models doing automated decisioning—and I stress that versus recommendation—need actuarial signoff because of the financial consequences of model error. So we’ve had a few companies in the region where the appointed actuaries at the clients have said they had to have comprehensive reports and signoff.

We’ve proven that traditional AI models can deliver value. But whether they have a business case depends heavily on managing the risk associated with the automation or customer experience improvements that they deliver.

And finally, I would just say, on technology, I think there’s still an underappreciation of the dependencies to manage risk in production and ensure the solution is sustainable. Things like monitoring and retraining of models. The technology stacks needed to do that. I think there are varied understandings of that. And as a result, I think we can expect to see differing levels of risk management.

Robert Eaton: It’s an interesting kind of notion you point out. You mentioned many stakeholders there, Lee. The regulation certainly has in mind the ultimate consumer as maybe its primary stakeholder, one of its primary stakeholders, and the others, as you mentioned, you know, the companies themselves and their shareholders. As actuaries, we’re always monitoring the financial health of the company. If we have new tools such as AI, it’s always going to be our responsibility to make sure that we integrate those in a way that is meaningful for our business.

Sven, any thoughts on your end?

Advantages and challenges of AI models for insurers

Sven Wagner: Sure, sure. I think it’s very similar to what Lee described. So clients are in the starting phase; they’re investigating, they’re looking into it. We have had a lot of discussions, projects, and workshops with clients, and it was always the case that they want to have an effective understanding of the AI models, otherwise they would not accept those models. And in the opinion of our clients, those models come with some advantages and some challenges.

Firstly, transparency and accountability: that is always an issue, because AI systems are very complex and less transparent compared to traditional actuarial models, for example. Actuarial models are known; people know how to work with them. With an AI model, that familiarity is not as common, and that is really an issue for them: if they cannot see how the decisions are made, the decisions are not explainable, and who is accountable in the end for the decisions of the models?

They are also concerned about bias and fairness. AI systems have the potential to sustain or even intensify biases present in the data they are trained on. Surely we also have that in traditional models, but there we have methods for validation and oversight that are well understood. So that is also an issue which they always report to us.

And thirdly, innovation versus regulation. There is a delicate balance between fostering innovation and ensuring regulation; it pulls in both directions, and you have to find the right balance between them to really get a higher-quality, more reliable AI solution.

Machine learning models versus Excel spreadsheets

Lee Sarkin: Sven, maybe just to emphasize one or two points there. I think, as we stand today, we really have multiple dimensions of AI explainability. Most of the frameworks talk about transparency, and that’s where AI explainability falls. That gets a lot of attention because of the need to justify, for example, underwriting decisions to the end customer, which is not new; we always had to do that under Treating Customers Fairly frameworks in many countries. From my perspective, we have well-established explainability methods today. And as someone who has done actuarial work for a long time, building pricing bases and such models, I began to feel that actually I could explain a machine learning model better than the vast Excel spreadsheet pricing bases I was developing out of experience data.

If you just consider for a moment that a machine learning model is coded end-to-end, for example in Python, the development path is fully transparent at every step of the process. Further, we have concepts of unseen data, cross-validation, and ways of evaluating predictive power.

The three dimensions of explainability are, first, the overall model; second, the patterns learned; and third, explaining the decision for a single customer. We can talk about the technical concepts for each of those three layers, but they have existed for years now, and we use them extensively pre- and postproduction.
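
To make those three layers concrete for readers, here is a minimal Python sketch, not taken from the episode: it uses scikit-learn on synthetic data, and every feature and figure is a hypothetical placeholder rather than a real underwriting model. It evaluates the overall model on held-out data, surfaces the patterns learned via permutation importance and partial dependence, and illustrates a single customer’s prediction with a simple what-if perturbation (a crude stand-in for full local-explanation methods such as SHAP or LIME).

```python
# Minimal sketch of the three explainability layers, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "underwriting" data: features stand in for application fields.
X, y = make_classification(n_samples=5_000, n_features=6, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Layer 1 - overall model: predictive power measured on unseen (held-out) data.
print("Hold-out accuracy:", round(model.score(X_test, y_test), 3))

# Layer 2 - patterns learned: which inputs drive predictions, and how.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importance per feature:", np.round(imp.importances_mean, 3))
pdp = partial_dependence(model, X_train, features=[0])  # average effect of feature 0
print("Partial dependence of feature 0 (first grid points):", np.round(pdp["average"][0][:5], 3))

# Layer 3 - a single customer: a simple what-if perturbation showing how one
# changed input moves that individual's predicted probability.
customer = X_test[[0]].copy()
base = model.predict_proba(customer)[0, 1]
perturbed = customer.copy()
perturbed[0, 0] += X_train[:, 0].std()  # nudge feature 0 by one standard deviation
print("Customer probability:", round(base, 3),
      "-> after change to feature 0:", round(model.predict_proba(perturbed)[0, 1], 3))
```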

So I would say a common theme for this episode is the differing standards applied to traditional models (and I include rules-based and actuarial models in that) versus AI or predictive models. I think, because of the regulation, AI is getting a lot of attention. But when you look at the development, validation, and testing methods for predictive models, they’re actually quite robust.

I think there’s a lot of food for thought here about actuarial models, where often the training data is the same as the testing data, and we have a false sense of security around the predictive power of best estimates when we write business on them in production. Predictive models, by contrast, go live with random holdouts and control groups from day one, and we don’t have to wait for claims: we have early feedback, and we have a pipeline of retraining data with proper technology and monitoring tools.
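
As an illustration of those production guardrails, here is a small numpy sketch; the figures are hypothetical assumptions and it does not reflect Munich Re’s actual tooling. It assigns a day-one random holdout (control group) and computes a simple population stability index (PSI), one common metric for monitoring drift in a model’s inputs or scores once it is live.

```python
# Sketch: day-one random holdout plus a simple PSI drift check for monitoring.
import numpy as np

rng = np.random.default_rng(0)

# Day-one random holdout: a slice of cases routed past the model so its lift can
# be measured against business-as-usual from the start, without waiting for claims.
n_cases = 10_000
holdout_rate = 0.10
in_holdout = rng.random(n_cases) < holdout_rate   # ~10% handled by the existing process
print("Control-group size:", int(in_holdout.sum()))

def psi(expected, actual, bins=10):
    """Population stability index between a baseline and a live distribution."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)   # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

baseline_scores = rng.beta(2, 5, size=5_000)   # score distribution at model sign-off
live_scores = rng.beta(2.5, 5, size=5_000)     # score distribution observed in production
print(f"PSI = {psi(baseline_scores, live_scores):.3f}",
      "(a common rule of thumb: above ~0.25 suggests material drift and a retraining review)")
```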

The actuarial control cycle is a very nice idea. But implementing it with process and tech, I think, lags significantly behind the same for predictive models. So I think there is a need to recalibrate the mindset. We are still hearing debates about black boxes. But really, I think the multiple layers of explainability for predictive models and the risk guardrails we can put around them are all quite mature.

Robert Eaton: Yeah, I fully follow that, Lee. And I think, also, it’s always been our responsibility as actuaries to communicate the decisions that we make to all of our stakeholders, whether that be the decisions to create pricing and rating in a certain way or the decisions to decline or ask for more information during the underwriting process—all of these have encountered scrutiny by business managers in our insurance companies for decades. So we’ve been held to this standard. And, as you mentioned, we’ve got some well-established ways to determine the explainability of the results, even with some of the more sophisticated machine learning models. And it’s an interesting point that the actuarial control cycle may actually lag behind what we’ve demanded from some of these more advanced machine learning models.

Lee Sarkin: What I am seeing in the industry, among underwriters, actuaries, and other domain experts, is an aversion to predictive models, from the perception that they’re not 100% accurate and that rule-based solutions are. Second, there’s a perception that predictive models take a lot more time and effort to build, deploy, and manage in production than rule-based solutions.

I think this is dynamically changing, and I think the rule-based system is not free of error. If you build a rule-based approach to uplift underwriting automation or straight-through processing, there will be errors, which translate into increased claims that are not priced for without loadings. And there seems to be more of a relaxed mindset to say, well, somehow we’ll find a pragmatic approach to accept that, whereas predictive models would be subjected to high scrutiny, extreme quantification of risk, and monitoring postproduction. And I think it’s important to raise understanding that traditional models are not free of error.

Robert Eaton: It’s a really great point. Not only might those rules-based, kind of traditional models produce more claims in some cases, but they can also produce denials, with the opportunity cost that comes with those inadvertent denials.

Well, let me get into one final topic before we wrap up here. Companies like Milliman and Munich Re are investing substantially in AI, and we see our clients doing so as well. There is going to be pressure from managers and from the C-suite to see the return on that investment. Where do you see AI being most effective at improving current processes and delivering some of that return on investment (ROI)? Maybe I’d ask Sven first and then Lee to wrap us up.

Where—and when—will AI deliver ROI for insurers?

Sven Wagner: Yeah, what we currently see in Europe, or in Germany mainly, is that the focus was always on efficiency. And when we discuss this with clients, we’re always emphasizing that they should also see the whole picture. To move to successful AI systems, they must also take the associated costs into account, including implementation and maintenance costs. Developing and maintaining AI systems is expensive—from initial development to continuous updates and monitoring, the costs can add up quickly. That is what always makes them a little bit afraid when we discuss this with them. The decision to build their own AI model, or to use an existing solution with some adjustment and integration, can make the difference in the company’s success at the end.

We also see the risk of poor decision-making. As we discuss with them, AI systems are only as good as the data they are trained on. If the data is flawed or biased, the AI’s decisions can be equally flawed, leading to poor business outcomes, and that is what they are afraid of. Vice versa, the quality of their data and its suitability for the purpose is the key to generating successful business decisions at the end.

So what we see is that, to maximize the value, insurance companies must adopt a balanced approach that combines AI innovation with robust risk management strategies. I think that is also what Lee discussed before. This involves continuous investment in compliance, transparency, cybersecurity, and workflow development. By proactively addressing these potential costs, insurers can harness the full potential of AI while safeguarding their financial stability, because that is always one of their main concerns, and maintaining the trust of their customers and stakeholders at the end.

I think ultimately the key is to leverage AI as an enabler rather than a replacement, because replacing people and replacing processes is not where we are going to be in the near future, in my opinion. We should rather focus on enablement than on replacement. By doing so, insurers can harness the benefits of AI while minimizing the financial and operational risks, because they have their processes in place and they try to enhance them with AI, but they are not completely dependent on it. This is where we currently see the focus in the discussions with our clients, and let’s see.

It’s a huge mindset shift for insurance companies in Germany, from regulated models, where everyone knows everything and understands everything, to these AI models. Let’s see how it develops in the future.

Robert Eaton: Yeah, thank you, Sven. Lee, any thoughts on the ROI that you might see demanded?

Finding the right insurance business problem to solve with AI

Lee Sarkin: I think, firstly, the AI journey always begins at the same place, and that’s finding the right business problem to solve. And I think that’s often where things go wrong, where, “What are we solving? And why does it matter when we solve it?” is not properly understood.

So the “what are we solving?” is broadly understood in the life business globally. It has to do with customer experience: the onboarding process when purchasing insurance and the friction in the journey for the customer. And so really, there’s a convergence across the U.S., APAC, the Middle East, and Africa around instant decisioning at the point of sale, which is being enabled by predictive models, and around the reduction of friction in the form of medical evidence and lengthy application forms, which is partly being supported by the use of external data. So, firstly, getting ROI on AI starts by solving the right problem and knowing the economics of that. Otherwise you’ll end up with solutions that don’t add value.

Once you have the right problem, how do you get a business case? As Sven said, it’s a trade-off of benefit and cost. With any model, for example one increasing straight-through processing, there’s a level of automation that corresponds to a level of increased future claims from model error. So that’s a balancing act and a function of many factors: the data, the model robustness, the risk guardrails around the use of the model, and how you can maintain and manage it in production. A toy calculation of that trade-off follows below.
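
To illustrate that balancing act, here is a toy back-of-envelope sketch; every figure is a hypothetical assumption and it is not an actual business case. It compares the underwriting expense saved by straight-through processing against the expected extra claims cost from the model errors that slip through, at a few automation rates.

```python
# Toy trade-off: expense saved by automation vs. expected extra claims from model error.
def automation_business_case(
    n_applications=100_000,
    automation_rate=0.60,            # share of cases decided straight-through by the model
    expense_saved_per_case=40.0,     # manual underwriting cost avoided per automated case
    error_rate=0.02,                 # share of automated cases given a too-favourable decision
    extra_claims_per_error=1_500.0,  # expected present value of additional claims per such error
):
    automated = n_applications * automation_rate
    expense_saving = automated * expense_saved_per_case
    extra_claims_cost = automated * error_rate * extra_claims_per_error
    return expense_saving, extra_claims_cost, expense_saving - extra_claims_cost

for rate in (0.4, 0.6, 0.8):
    saving, claims, net = automation_business_case(automation_rate=rate)
    print(f"automation {rate:.0%}: saving {saving:,.0f}, "
          f"extra expected claims {claims:,.0f}, net {net:,.0f}")
```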

So I think, for traditional AI, the ROI in life and health insurance is very much around customer experience and nondisclosure or fraud detection. Obviously, we should mention gen AI as well, which is more of an efficiency play.

And I think it’s important to recognize that real automated decisioning is mostly going to come from traditional AI, simply because of regulatory constraints around gen AI. That also links to what Sven said about the materiality of the use case. Currently, we are using models only to advantage the end customer—in other words, to offer a better customer journey and instant access to standard underwriting decisions. Once you start predicting nonstandard decisions (declines, loadings, exclusions) and you have errors in the model prediction, that may disadvantage the end customer. Regulatory frameworks will be heavily critical of that, as they should be. So we have tended to focus on only advantaging the customer and achieving those benefits at minimal increased risk cost.

So yeah, I think gen AI will be about productivity gain in the workflow of the underwriter, of the claim assessor. And that’s why we are starting to see the concept of co-pilots coming about—the use of large language models to aid or support those domain experts in the workflow.

AI could offer augmentation for insurers

So I think, to Sven’s point and yours, Robert, we’re really looking at augmentation. There may be implications for labor, but our starting point is augmentation and not replacement. In fact, we are seeing domain experts upskilling themselves and becoming vital to the success of these solutions. You asked about ROI: if you don’t have underwriters and actuaries with deep domain knowledge, or data scientists as well, the solutions may fail.

But I would say in closing, Robert, that there needs to be adequate investment by the C-level. So you know, understanding the foundations to put in place. Not just people, but technology, process, risk management, data pools, data strategy. These require a certain level of effort, and they take time to put in place.

We didn’t discuss the timeline. When should a C-level expect monetization of AI? Is it 12 months? Is it a multiple-year time horizon? I think the foundations required here are different to traditional business lines like pure risk transfer, which have been around for decades.

Robert Eaton: These are great points, Sven and Lee. Going back real quick to something that Sven said: I do think that if we produce insurance products that don’t earn the trust of our ultimate consumers, we’re going to fail as an industry. Lee, you gave a couple of examples. If, for instance, some of our decisions disadvantage some of our customers, that’s going to reflect poorly on us. It’s going to come back to us in many ways, and trust is really the bedrock of our industry; companies live and die by it.

To your point, Lee, on generative AI and language models, I definitely see this as kind of an evolving human-computer interface. So to the extent that we’re going to use those language models to better interact with co-pilots, with writing, with speaking, and interacting with our computers and the people around us, it’s going to be a huge process improvement, as you pointed out.

This has been a really fascinating conversation. I want to thank my guests again. Lee Sarkin of Munich Re and Sven Wagner of Milliman, thank you so much for joining me today. Please don’t miss the first two Critical Point episodes on artificial intelligence, which you can find in the show’s library and on our website, Milliman.com.

Once again, you’ve been listening to Critical Point, presented by Milliman. If you enjoyed this episode, please rate us five stars on Apple Podcasts or wherever you get your podcasts and share this episode with your colleagues. We’ll see you next time.

