According to the National Association of Subrogation Professionals (NASP), approximately 15% of all claims involve subrogation opportunities that are missed, and 32% of recoveries are either not pursued or closed with zero collections. Such missed opportunities have a direct and significant impact on insurers' combined ratios.
There are a number of reasons for such statistics; chief among them are:
- Limited Investigations: Only a very small percentage of claims are analyzed at a deeper level for subrogation, primarily due to limited bandwidth and operational cost structures.
- Limited Information Processed: Limited bandwidth also reduces the amount of information adjusters can process to determine subrogation potential; for example, images receive negligible analysis across all claims, or even across tagged claims.
- Reliance on Training and Expertise: The process depends largely on the training, experience, and expertise of the assigned adjuster, and varying degrees of expertise often lead to inconsistent results.
- Flag-based Models: The flag-based models and systems that exist at most insurance carriers do a good job with known situations and patterns, but handle uncertainty and newly emerging patterns poorly.
- Technology Limitations: Technologies that can generate insights from unstructured data, such as documents and images, have only recently matured; adoption has therefore lagged, leaving these data assets underused and processing suboptimal.
Evolution and Advancements in Computer Vision
Computer vision aims to create a computational model of human vision, the underlying idea being that such a system could operate autonomously and perform the tasks human vision performs, or even surpass it in some cases. Computer vision is not new; since its early start in the 1960s it has come a long way in maturity, and today neural networks such as ResNet-50 can classify images into 1,000 object categories. Advancements over the last five years have accelerated the adoption of computer vision in real-world applications.
Leading companies like Tesla are using computer vision to tackle massively complex problems such as autonomous driving. Researchers at Northwestern University were also able to use CT scan images and computer vision to detect early signs of lung cancer a full two years before a trained radiologist could. Real-world applications like these are accelerating, penetrating every industry, and creating significant positive impact on human lives.
Claims adjudication, and subrogation detection in particular, lends itself perfectly to computer vision, as the bulk of the information about a claim sits in statements, documents, and incident images.
How Computer Vision Can Be Leveraged for Subrogation
The promise of touchless claims has been around for a while, but it has not fully come to fruition because the underlying technologies took time to mature. With NLP, text analytics can now be applied to claim notes in conjunction with computer vision for image analysis, enabling early subrogation detection and risk segmentation with little triaging effort. These technologies can finally be used in a production-level system to adjudicate claims more efficiently. A solution that detects subrogation early needs three distinct capabilities:
a) Determine the nature of the incident in a claim
b) Accurately calculate the liabilities of the parties involved
c) Provide justification using evidence and state laws
A good computer vision and NLP model can augment all three capabilities: processing incident images, generating deeper insights, and helping corroborate or refute statements and other documentary evidence, such as police reports, related to a claim.
Let's look at "determining the nature of an incident." Computer vision models can examine incident images and classify the type, location, and extent of damage. This classification leads to a clearer understanding of how the incident might have occurred, helping create a digital blueprint of the incident that other models can process to ultimately determine subrogation potential.
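As a minimal sketch of this step, the per-image classifier outputs (damage type, location, severity) can be collated into a structured "digital blueprint." The field names, labels, and confidence threshold below are illustrative assumptions, not a real carrier taxonomy; in practice the findings would come from a trained vision model.

```python
from dataclasses import dataclass

@dataclass
class DamageFinding:
    """One hypothetical finding from a vision model for a single image."""
    damage_type: str   # e.g., "collision" (illustrative label)
    location: str      # e.g., "rear bumper"
    severity: str      # "minor" | "moderate" | "severe"
    confidence: float  # model confidence, 0..1

def build_incident_blueprint(findings, min_confidence=0.6):
    """Collate per-image findings into a digital blueprint that
    downstream subrogation models can consume."""
    accepted = [f for f in findings if f.confidence >= min_confidence]
    severity_order = ["minor", "moderate", "severe"]
    return {
        "damage_types": sorted({f.damage_type for f in accepted}),
        "damage_locations": sorted({f.location for f in accepted}),
        "max_severity": max(
            (f.severity for f in accepted),
            key=severity_order.index,
            default="unknown",
        ),
        "evidence_count": len(accepted),
    }

findings = [
    DamageFinding("collision", "rear bumper", "moderate", 0.91),
    DamageFinding("collision", "trunk lid", "minor", 0.74),
    # Low-confidence finding, filtered out by the threshold:
    DamageFinding("glass breakage", "rear window", "severe", 0.35),
]
blueprint = build_incident_blueprint(findings)
```

The confidence gate matters operationally: it keeps weak model outputs from contaminating the evidence file that a recovery specialist will later rely on.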
Secondly, "determining the liability" can be accomplished by building on the nature of the claim: applying patterns based on the damage in the incident images to decipher the role of each involved party, and then their potential liability. In addition, state laws can be applied based on the location to articulate a liability percentage for each party.
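Applying state law to fault shares can be sketched as follows. This is a deliberately simplified model of the main US negligence regimes (pure comparative, modified comparative with a fault bar, and pure contributory); real statutes and thresholds vary by state, and the rule names and function are illustrative assumptions.

```python
def apportion_liability(fault_shares, rule="pure_comparative", bar=0.51):
    """Apply a simplified state negligence rule to per-party fault shares.

    fault_shares: dict of party -> fault fraction (shares sum to 1.0).
    rule: "pure_comparative"  - recover in proportion to others' fault
          "modified"          - recovery barred at/above `bar` fault
          "pure_contributory" - any fault bars recovery entirely
    Returns dict of party -> recoverable fraction of that party's damages.
    """
    recoverable = {}
    for party, fault in fault_shares.items():
        if rule == "pure_contributory":
            recoverable[party] = 0.0 if fault > 0 else 1.0
        elif rule == "modified":
            recoverable[party] = 0.0 if fault >= bar else 1.0 - fault
        else:  # pure comparative
            recoverable[party] = 1.0 - fault
    return recoverable

# Illustrative fault split inferred from the incident blueprint:
shares = {"insured": 0.2, "third_party": 0.8}
recovery = apportion_liability(shares, rule="modified", bar=0.51)
```

Under a modified rule with a 51% bar, the insured (20% at fault) can recover 80% of damages, while the mostly-at-fault third party recovers nothing; the same shares under a pure contributory rule would bar both parties.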
Finally, evidence such as party statements, police reports, the incident blueprint, and the roles of the involved parties can be collated into a strong subrogation file and demand letter, improving the chances of collection. The insights from the images, and the images themselves, act as strong pieces of evidence that can expedite the processing and collection of a subrogation claim.
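The collation step above can be sketched as assembling the blueprint, liability assessment, and supporting documents into one structured file. The schema and field names here are hypothetical, chosen only to illustrate the idea; they are not a standard subrogation file format.

```python
def assemble_subrogation_file(claim_id, blueprint, liability, evidence_docs):
    """Collate incident evidence into an illustrative subrogation file.

    liability: dict of party -> fault fraction (e.g., from a state-law model).
    evidence_docs: list of supporting document names/paths.
    """
    # Direct the demand at the party carrying the highest fault share.
    target = max(liability, key=liability.get)
    return {
        "claim_id": claim_id,
        "recovery_target": target,
        "target_fault_share": liability[target],
        "incident_blueprint": blueprint,
        "supporting_evidence": list(evidence_docs),
        "demand_summary": (
            f"Demand directed at {target}, assessed at "
            f"{liability[target]:.0%} fault based on image analysis, "
            f"party statements, and the police report."
        ),
    }

file_ = assemble_subrogation_file(
    claim_id="CLM-001",  # hypothetical identifier
    blueprint={"max_severity": "moderate"},
    liability={"insured": 0.2, "third_party": 0.8},
    evidence_docs=["police_report.pdf", "rear_bumper.jpg"],
)
```

Keeping the file structured (rather than free text) lets the same artifact drive both the demand letter and any downstream arbitration workflow.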
One of the biggest advantages of a solution built on foundational neural networks and computer vision is that it can be retrained on new patterns as they emerge, with minimal manual intervention, reducing subrogation leakage significantly. These techniques work not only on incident images but also on video recordings (such as dashcam footage), which can likewise form the basis of strong evidence.
As we can see, computer vision in conjunction with NLP can deliver a massive advantage to insurers: detecting more subrogation, cutting time and effort, and simultaneously improving accuracy and collection success rates. We are just scratching the surface with this technology; as autonomous technology becomes mainstream, computer vision will become key to determining liability and processing subrogation efficiently and accurately.