AI and Insurance: Why Domain Expertise Will Matter Even More in 2019

January 30, 2019

Most insurance carriers have embraced AI in various aspects of their business processes. However, because the industry is slow-moving and risk-averse, adoption is still not happening at a rate that makes a real difference to consumers or delivers significant value to the carriers themselves.

Taking a closer look at some of the causes of this uncertainty around AI technologies brings forward an interesting point. When AI solutions were first implemented, the consensus was that people would be happy to see their data used optimally to streamline processes and surface previously hidden insights.

AI technologies can learn, but who will teach them?

However, most solution providers and companies overlooked the fact that while AI technologies such as machine learning can certainly make processes like claims settlement more efficient, transparent, and accurate, they need time to “learn” to read data patterns. Before AI, manual processes such as fraud identification and the decision to litigate were handled by an expert on a case-by-case basis.

For example, when deciding whether to pursue a potentially fraudulent claim, the expert drew on his or her experience to determine a) whether there was just cause for initiating an investigation, b) whether the cost of the investigation was within acceptable limits, and c) the overall chances of the judgment going in the company’s favor. A newly minted machine learning algorithm does not have this expertise coded into it, though it can learn to make such nuanced decisions once it understands the parameters and the factors it needs to weigh.

It is therefore quite apparent that we need human experts to show our neural networks how such decisions are made. By asking domain experts to articulate their reasoning and the methods they use to make predictions or choices, we can sift through millions of data points and focus on the ones that are relevant, as sketched below. Knowing these relevant parameters is extremely valuable: it significantly reduces the time required to train the machine learning algorithms.
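As an illustration only (not a description of any particular carrier’s system), the sketch below shows how a short list of expert-chosen parameters, mirroring the three questions above, might feed a simple fraud-referral model. The file name, feature names, and label column are hypothetical placeholders.

# A minimal sketch, assuming a hypothetical historical claims file and
# expert-chosen feature names; not any specific carrier's production model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

claims = pd.read_csv("claims_history.csv")  # hypothetical claims dataset

# Parameters a claims expert might identify as decision-relevant, mirroring
# the three questions above: just cause, cost, and likely outcome.
expert_features = [
    "inconsistency_score",            # proxy for "just cause" to investigate
    "estimated_investigation_cost",
    "claim_amount",
    "prior_litigation_win_rate",      # proxy for chance of a favorable judgment
]

X = claims[expert_features]
y = claims["pursued_and_upheld"]      # 1 if pursuing the claim proved justified

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_test, y_test))

Restricting the model to a handful of expert-vetted inputs, rather than every available field, is what shortens the training and validation cycle described above.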

New risks require new ways to assess them

The value of domain expertise rises even higher as the nature of insured risk changes around the world. For example, people are changing the way they use transport: the use of ride-hailing services like Uber and Lyft needs to be accounted for when drawing up risk parameters. Add self-driving cars to the mix, and the scenario becomes even more complex. And this is just one example; new kinds of risks that may need coverage are emerging every day.

Since there is not yet enough data for machine learning solutions to train on, domain experts will need to make the assessments, identify the correct parameters, draw up the process flow, and teach the algorithms how to arrive at the right decisions.
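One way to picture this, purely as an assumption about how an expert’s process flow might be encoded, is a set of provisional rules for a new exposure that can guide decisions (and later generate training labels) while real data accumulates. The fields, thresholds, and tiers below are all hypothetical.

# A minimal sketch of encoding an expert's assessment rules for a new risk
# (e.g., ride-hailing usage). Every field, threshold, and tier is hypothetical.
from dataclasses import dataclass

@dataclass
class RideHailExposure:
    weekly_trips: int
    carries_passengers_for_hire: bool
    vehicle_is_autonomous: bool

def expert_risk_tier(exposure: RideHailExposure) -> str:
    """Rules a domain expert might draw up before any trained model exists."""
    if exposure.vehicle_is_autonomous:
        return "refer_to_specialist"
    if exposure.carries_passengers_for_hire and exposure.weekly_trips > 20:
        return "high"
    if exposure.carries_passengers_for_hire:
        return "medium"
    return "standard"

print(expert_risk_tier(RideHailExposure(weekly_trips=25,
                                        carries_passengers_for_hire=True,
                                        vehicle_is_autonomous=False)))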

Zero social bias policy is essential for accurate training

Just as AI technologies can learn to recognize patterns and predict behavior, they can also learn bias. Social bias is a fundamentally human trait that can be consciously or unconsciously transferred to machine learning algorithms. Domain experts therefore have an added responsibility to keep a check on the training process so that no social bias is introduced into the models.

If such checks are not in place, the algorithms will become imprecise and end up producing socially or legally unacceptable results. For example, if an expert training a neural network repeatedly flags claims by residents of a specific zip code that turns out to be a low-income area, the assumption that people from low-income areas tend to file more fraudulent claims may creep into the algorithms. The expert may never have intended to pass on a personal bias, but it happens inadvertently because of the nature of the learning process.

Therefore, a system of checks and balances rooted in social ethics needs to be in place to ensure accurate analyses by AI solutions.
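What such a check looks like in practice will vary by carrier and jurisdiction; purely as a sketch, the snippet below compares a model’s flag rate across income bands and raises an alert when the gap exceeds a chosen threshold. The column names, grouping, and threshold are assumptions for illustration.

# A minimal sketch of one possible bias check on scored claims.
# "income_band", "flagged", and the 0.10 threshold are hypothetical.
import pandas as pd

def flag_rate_disparity(scored: pd.DataFrame, threshold: float = 0.10) -> bool:
    """Return True if flag rates differ across income bands by more than threshold."""
    rates = scored.groupby("income_band")["flagged"].mean()
    disparity = rates.max() - rates.min()
    print(rates.to_string())
    print(f"disparity: {disparity:.2f}")
    return disparity > threshold

scored = pd.DataFrame({
    "income_band": ["low", "low", "low", "high", "high", "high"],
    "flagged":     [1,     1,     0,     0,      0,      1],
})
if flag_rate_disparity(scored):
    print("Review training data and features for proxy bias (e.g., zip code).")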

Conclusion

Some studies suggest that investment in AI has already slowed and that companies want to verify the promised ROI of deployed AI technologies before taking their adoption drive further. At the same time, with a better understanding of AI’s role and capabilities, it looks like this year insurance companies will choose the applications of AI that matter to their business and see past just the shine and glitter of AI.

To ensure that AI solutions deliver both business value and customer satisfaction, the industry needs to leverage domain experts to train the solutions and develop additional parameters for their continued learning.


Varun Chutani

Varun Chutani is a solution architect at insuranalytics.ai with 11+ years of experience in the P&C insurance industry, working on product development, actuarial modeling, and AI/ML solutions. He possesses a rare combination of insurance business knowledge and strong technology know-how.
