Data Quality in Claims Adjudication

Claims Data Validation

Claims are one of the most important customer touchpoints for any insurance carrier and the real test of a carrier's commitment to quality of service. Claims handling is also the most labor-intensive operation in the insurance lifecycle, requiring accurate categorization and assessment of claims while sifting through massive amounts of data. The quality of that data is therefore of paramount importance: fast, accurate claims processing depends on it.

Why It Matters

Claims adjudication is inherently complex because of the multitude of parties and data points involved; a lack of quality data leads to delays and ineffective management of critical business processes, producing sub-optimal results.

One of the first casualties of poor-quality data is the speed of adjudication (cycle time), and customer satisfaction (NPS) suffers as a direct result.

Inaccurate decision-making also has a significant impact on a carrier's overall financial metrics (combined ratio, reserving). The net effect is that genuine claims take much longer, fraudulent claims are harder to catch, and subrogation recovery is sub-optimal.

According to a recent survey of insurance companies, only 57% of participants felt they were leveraging their data analytics solutions effectively, and nearly two-thirds (66%) said that data quality is the biggest challenge their data analytics programs face.

Data Quality Conundrum

There are many reasons low-quality claims data accumulates; most stem from the way data is collected throughout the lifecycle (at FNOL and subsequently) and maintained. Here are some of the common data quality issues and their causes (a minimal validation sketch follows the list):

  • Inaccurate or incomplete contact information, often caused by duplicate customer or claimant records across systems – makes it difficult to collect information, verify coverage, and provide timely updates and payments.
  • Data stored in individual spreadsheets, with no common data models, data structures, or data definitions – creates an information vacuum and makes data sharing challenging, resulting in inaccurate adjudication, higher expenses, potentially bigger losses, and dissatisfied customers.
  • Unstructured data stored in disparate systems, with varying degrees of granularity – is hard to collate because the related metadata is usually missing or inaccurate, which skews estimates and vendor recommendations and delays accurate payments.
  • Incorrect labeling of injury codes and inconsistent abbreviations in adjuster notes – lead to inaccurate decision-making and delays.
  • Data-input errors in individuals' (customer, driver, participant, claimant) data, e.g. telephone numbers, addresses, IDs, and names – make it impossible to correlate information across systems, delaying information gathering from the various parties and accurate adjudication.
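
To make the last point concrete, here is a minimal sketch of first-pass hygiene checks on claimant records. It assumes a flat, dict-based record layout; the field names and the US-style phone format are illustrative assumptions, not any carrier's actual schema:

```python
import re

PHONE_RE = re.compile(r"\D+")  # anything that is not a digit

def normalize_phone(raw: str) -> str | None:
    """Strip formatting and validate a US-style 10-digit phone number."""
    digits = PHONE_RE.sub("", raw or "")
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop the country code
    return digits if len(digits) == 10 else None

def dedup_key(record: dict) -> tuple:
    """Crude match key used to surface likely duplicate claimant records."""
    name = (record.get("last_name", "") + record.get("first_name", "")).lower()
    return (re.sub(r"[^a-z]", "", name), normalize_phone(record.get("phone", "")))

def find_duplicates(records: list[dict]) -> list[list[dict]]:
    """Group records that collide on the match key."""
    groups: dict[tuple, list[dict]] = {}
    for rec in records:
        groups.setdefault(dedup_key(rec), []).append(rec)
    return [group for group in groups.values() if len(group) > 1]

# Two differently formatted entries for the same person collide on the key:
records = [
    {"first_name": "Jane", "last_name": "Doe", "phone": "(412) 555-0100"},
    {"first_name": "JANE", "last_name": "Doe ", "phone": "1-412-555-0100"},
]
print(find_duplicates(records))
```

In practice, carriers would layer probabilistic matching and a master data management solution on top of a crude key like this one, but even a first pass of this kind surfaces many of the duplicates described above.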

The Business Impact

As noted earlier, poor-quality data significantly undermines accurate claims adjudication, and the resulting impact on a carrier is disproportionately large.

Inaccurate claims reserving: Accurate claims reserving is a critical piece of the puzzle for any carrier to manage its finances and investments effectively. Regulators in most countries require carriers to hold enough cash reserves to cover losses, so striking a balance between adequate reserves and investable capital is critical for profitability. Inaccurate reserving ties up investment capital (if reserves are too high) or, worse yet, creates regulatory compliance issues (if reserves are too low).
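
As a simple illustration of that balance, the sketch below flags case reserves that drift outside a tolerance band around the current best estimate of ultimate loss. The band and the figures are hypothetical; real-world reserving relies on actuarial methods, not a fixed threshold:

```python
def reserve_flag(case_reserve: float, estimated_ultimate: float,
                 tolerance: float = 0.10) -> str:
    """Flag reserves sitting outside a +/- tolerance band around the
    best estimate of ultimate loss (the 10% band is a hypothetical)."""
    if estimated_ultimate <= 0:
        return "no-estimate"
    ratio = case_reserve / estimated_ultimate
    if ratio > 1 + tolerance:
        return "over-reserved"   # capital tied up that could be invested
    if ratio < 1 - tolerance:
        return "under-reserved"  # potential regulatory exposure
    return "adequate"

print(reserve_flag(130_000, 100_000))  # over-reserved
print(reserve_flag(85_000, 100_000))   # under-reserved
```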

Extended cycle times: Typical cycle times range from a few days to a few weeks. A 10-20% increase in cycle time caused by poor data quality not only compromises customer experience and, consequently, retention, but also increases Loss Adjustment Expenses (LAE), directly impacting the expense ratio.
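
A back-of-the-envelope calculation shows how this flows through to the expense ratio; all figures below are assumed for illustration:

```python
earned_premium = 500_000_000  # assumed premium for a mid-size book
base_lae = 50_000_000         # assumed baseline Loss Adjustment Expense
lae_increase = 0.15           # mid-point of the 10-20% range above

extra_lae = base_lae * lae_increase
print(f"Extra LAE: ${extra_lae:,.0f}")                            # $7,500,000
print(f"Expense-ratio impact: {extra_lae / earned_premium:.2%}")  # 1.50%
```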

Hard-to-identify fraud: According to the Coalition Against Insurance Fraud, 10-20% of all claims payments go to fraudulent claims, directly impacting the insurer's loss ratio. Poor data quality can push this percentage 2-3 points higher, which can translate into several million dollars for a typical mid-size carrier.
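
Using the low end of that range, a rough sizing for a hypothetical mid-size carrier (the book size is an assumption) looks like this:

```python
annual_claims_paid = 200_000_000  # assumed claims paid by a mid-size carrier
extra_leakage_rate = 0.02         # low end of the 2-3% range above

print(f"Added fraud leakage: ${annual_claims_paid * extra_leakage_rate:,.0f}")
# -> Added fraud leakage: $4,000,000
```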

Sub-optimal subrogation recovery: Subrogation recovery directly impacts the loss ratio; poor-quality data makes it harder to spot subrogation opportunities and to establish the responsible parties from incident details. Every $2,000 subrogation opportunity that is missed translates directly into $2,000 of additional losses for the carrier.

The financial impact of poor-quality data on enterprises is far-reaching: in 2017, Gartner estimated its average cost at $15 million per enterprise. In our view, the figure for insurers is significantly higher given the data-centric nature of the business; the goods and services insurance carriers sell are primarily data products. Given the size of this impact, there is growing awareness within insurance companies of the need for processes and systems that tackle data quality challenges.

In part two of this blog, we will discuss measures to improve claims data quality and look at ways to leverage that data more effectively.

Siddhartha Vowles

Siddhartha (Sid) has over 8 years of experience designing and customizing products and solutions in the insurance domain. He holds a master's degree from Carnegie Mellon University with a specialization in artificial intelligence.
