Application of AI in Insurance Claims Settlement: What’s Happening and What’s Next

By Ankit Yadav | March 20, 2019 | Blogs

Industries all over the world are widely adopting AI technologies, and the insurance industry has some catching up to do. However, there is one area of insurance where AI has already made a massive impact: insurance claims settlement. I spoke at the Pune Data Conference 2019, where practitioners from various companies gathered to exchange ideas and experiences about AI's growing application in insurance and how to handle this technological revolution.

Here are a few of the points I discussed at the conference. I opened with the challenges of claims processing, most of which you may already be aware of:

  • Claims settlement is a time-consuming process due to the various levels of authentication and verification required before the claims payment can be released.
  • Claims processing is error-prone, mainly due to the human factor.
  • The cost of settling claims is upwards of 30% of the revenue of an insurance company.
  • Insurance fraud is a stark reality, and the insurers and their customers lose billions of dollars to it every year. The current estimate is at USD 40 billion for non-health insurance.
  • Customer dissatisfaction is a direct result of an inefficient claims settlement process. Add to this the cost of fraud that drives up premiums, and you have customers ready to churn.
  • As data gets digitized, the threat to the security of customer data has also increased. Every day, insurance companies fend off a growing number of cyberattacks. Protecting sensitive customer data and information is critical, and also expensive.

How AI can Help Mitigate the Challenges

  • AI solutions can fast-track claims by efficiently segmenting them, enabling quick and accurate claims settlement and reducing claim adjustment expenses.
  • By evaluating various data parameters, it is possible to detect anomalous claims based on customer behavior and reduce fraud (see the sketch after this list).
  • Getting the right adjuster assigned to a particular claim is critical for settling it with the correct payout promptly.
  • Insurers can also use AI solutions to predict the subrogation potential of a claim.
  • Customer churn is a huge challenge for the industry, so insurers can use AI to assess customer behavior and act on it to reduce the churn rate.
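
To make the fraud-detection point concrete, here is a minimal sketch of how an anomaly detector might flag unusual claims from a handful of behavioral features. The feature names, values, and choice of an Isolation Forest are illustrative assumptions, not our production setup.

    # Hypothetical sketch: flagging anomalous claims with an Isolation Forest.
    # Feature names and thresholds are illustrative, not our production model.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Toy claim features: [claim_amount, days_to_report, prior_claims_count]
    claims = rng.normal(loc=[2000, 5, 1], scale=[500, 2, 1], size=(500, 3))
    suspicious = np.array([[25000, 90, 8], [18000, 60, 6]])  # injected outliers
    X = np.vstack([claims, suspicious])

    model = IsolationForest(contamination=0.01, random_state=0).fit(X)
    flags = model.predict(X)  # -1 marks claims worth routing to a fraud analyst

    print("Flagged claim indices:", np.where(flags == -1)[0])

Claims flagged this way would still go to a human investigator; the model only prioritizes which ones to look at first.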

How We Developed the AI Models to Tackle Different Use Cases

We determined that a sprint-based plan was the most suitable approach for building and validating the AI models. A typical sprint for each use case looks like the one below:

A Typical Sprint

From here, I dove deeper into how we managed the biggest challenge in training neural networks: feature selection. A neural network needs to know which data points it should consider and which are irrelevant.

By using a systematic data profiling program, we were able to narrow 10,000+ data points down to a little over 100 relevant ones.
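
As a rough illustration of the kind of profiling filters involved, the sketch below drops mostly-empty and near-constant columns, then ranks what remains by mutual information with the target. The column handling, thresholds, and the assumption of an already-encoded target label are hypothetical, not our actual claims schema or pipeline.

    # Simplified sketch of profiling-style feature reduction; the thresholds
    # and the assumption of an encoded target (e.g. a 0/1 fraud flag) are
    # illustrative, not the actual claims schema.
    import pandas as pd
    from sklearn.feature_selection import mutual_info_classif

    def profile_features(df: pd.DataFrame, target: str, top_k: int = 100) -> pd.Series:
        y = df[target]
        X = df.drop(columns=[target])

        # Drop columns that are mostly empty or nearly constant.
        X = X.loc[:, X.isna().mean() < 0.5]
        X = X.loc[:, X.nunique() > 1]

        # Keep numeric columns, impute medians, and rank by mutual information.
        X_num = X.select_dtypes("number")
        X_num = X_num.fillna(X_num.median())
        mi = mutual_info_classif(X_num, y, random_state=0)
        return pd.Series(mi, index=X_num.columns).sort_values(ascending=False).head(top_k)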

I discussed the series of experiments we ran to find the best way to train the neural networks, which included working with CNNs, RNNs (LSTM and GRU), Transformers, attention mechanisms, ELMo, BERT, and word embeddings for unstructured text and language modeling.
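
To give a flavor of those experiments, here is a minimal PyTorch sketch of a recurrent text classifier in which the LSTM and GRU cells can be swapped; the vocabulary size, dimensions, and pooling choice are illustrative only, not the architecture we settled on.

    # Minimal recurrent text classifier sketch (PyTorch); dimensions and the
    # LSTM/GRU swap are illustrative only.
    import torch
    import torch.nn as nn

    class NotesClassifier(nn.Module):
        def __init__(self, vocab_size=20000, embed_dim=128, hidden=64,
                     num_classes=2, cell="lstm"):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
            self.rnn = rnn_cls(embed_dim, hidden, batch_first=True,
                               bidirectional=True)
            self.classifier = nn.Linear(2 * hidden, num_classes)

        def forward(self, token_ids):
            embedded = self.embedding(token_ids)   # (batch, seq, embed_dim)
            outputs, _ = self.rnn(embedded)        # (batch, seq, 2 * hidden)
            pooled = outputs.mean(dim=1)           # simple mean pooling
            return self.classifier(pooled)

    # Example: score a batch of 4 token sequences padded to 50 tokens.
    logits = NotesClassifier(cell="gru")(torch.randint(1, 20000, (4, 50)))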

As the discussion moved towards machine learning, it became clear that looking inside the AI black box is increasingly important. The end-users of AI technology want to understand exactly how an algorithm reaches a conclusion, and this understanding is also necessary to confirm that the neural network is being trained properly and is using the correct parameters and relevant data points. I demonstrated how one can inspect what a model is learning during training using TensorBoard and other model visualization tools.
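
The snippet below shows the general TensorBoard logging pattern from that demo, scalar loss curves plus weight histograms, applied to a tiny placeholder model and synthetic data rather than our actual claims models.

    # TensorBoard inspection sketch; the tiny model and synthetic data stand in
    # for the real claims models.
    import torch
    import torch.nn as nn
    from torch.utils.tensorboard import SummaryWriter

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    writer = SummaryWriter(log_dir="runs/claims_demo")

    X = torch.randn(256, 10)
    y = torch.randint(0, 2, (256,))

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

        writer.add_scalar("train/loss", loss.item(), step)   # loss curve
        if step % 20 == 0:
            for name, param in model.named_parameters():
                writer.add_histogram(name, param, step)      # weight drift

    writer.close()

After training, running "tensorboard --logdir runs" serves the dashboards, where you can check that the loss is falling and the weight distributions are actually moving.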

From here, the challenge of unstructured data took center stage, and I demonstrated the results we have been able to achieve with deep learning models. In claims settlement, much information is recorded as unstructured data: photographs of the damaged vehicle, incident reports and receipts, emails between various parties, and, most importantly, notes written by the adjuster in shorthand.

We further explored attention-based models and their significance in an interactive session. The point that emerged was that a well-designed multi-headed self-attention mechanism is one of the most effective ways to train sequence-based models, and it also helps us make better use of our unstructured data points.
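
As a minimal illustration, the snippet below runs a batch of token embeddings through PyTorch's built-in multi-headed self-attention layer; the dimensions are arbitrary, and the single layer stands in for the full attention blocks discussed in the session.

    # Multi-headed self-attention over a batch of token embeddings (PyTorch).
    import torch
    import torch.nn as nn

    embed_dim, num_heads = 128, 8
    attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    tokens = torch.randn(4, 50, embed_dim)   # (batch, seq_len, embed_dim)

    # Self-attention: queries, keys, and values all come from the same sequence,
    # so every token can weigh every other token when building its representation.
    attended, weights = attention(tokens, tokens, tokens)

    print(attended.shape)   # torch.Size([4, 50, 128])
    print(weights.shape)    # torch.Size([4, 50, 50]), averaged over the 8 heads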

We also went over the latest trends in NLP. We discussed ideas for applying transfer learning to NLP models, along with past limitations that can now be overcome. The latest deep learning research coming out of Google and OpenAI was also a hot topic of discussion.
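
To sketch the transfer-learning idea, the example below loads a pretrained BERT, freezes its encoder, and fine-tunes only a small classification head on claims-style text. The model name, the two-label setup, and the freezing strategy are illustrative assumptions, not a description of our models.

    # Transfer-learning sketch with Hugging Face Transformers; labels and
    # example sentences are made up for illustration.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    # Freeze the pretrained encoder; only the new classification head trains.
    for param in model.bert.parameters():
        param.requires_grad = False

    batch = tokenizer(["Rear-end collision, other driver at fault",
                       "Water damage reported 90 days after the incident"],
                      padding=True, truncation=True, return_tensors="pt")
    labels = torch.tensor([0, 1])

    outputs = model(**batch, labels=labels)
    outputs.loss.backward()   # gradients reach only the classification head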

AI is slated to play a critical role in automating and improving daily insurer operations that involve image and text processing. The insurance industry is highly risk-averse, but if it can be demonstrated that an AI model offers massive benefits to insurers, such as reducing current costs, improving outcomes (and thereby reducing future costs), and keeping customers happy, the industry may be more inclined to adopt the technology.

Ankit Yadav

Ankit Yadav is a deep learning researcher who works on developing deep learning applications for the insurance and life sciences domains. He is also a FastAI International fellow and an AI Saturdays contributor. He likes to train neural nets on large datasets.
