Using Artificial Intelligence (AI) to resolve and automate everyday, real-world business issues may still feel like a thing of the future to many insurers, but the knowledge, toolkits and structures are already in place, ready to be exploited.
Practical Artificial Intelligence solutions can be thought of in three layers: the AI Business Use Case, the AI-Library of enablers, and the AI-Operations layer.
The AI Business Use Case defines the business case that delivers a payback to the insurer. Using our AI-VDA (Value Driver Analysis) approach, we aim to identify and justify AI-Use Cases that deliver tangible ROI within twelve months.
The AI-Library refers to the cluster of technologies that provide the technical solutions that we combine and exploit to deliver the AI-Use Case into live operation.
The AI-Operations layer ensures that the AI solution operates predictably to the required levels of explainability, accuracy and performance, within the expected computational, memory and cost envelopes identified in the business case.
Lumera has set out to build a full range of use cases addressing the whole Life and Pensions value chain. Right now, we offer four proven AI-Use Cases:
- Voice of the Customer uses sentiment analysis to map online reactions to products, advertising and corporate messaging, allowing firms to gain instant insight into market sentiment and modify products or messaging accordingly.
- Customer On-boarding uses Intelligent Document Processing (IDP) to automate almost all onboarding activity. It includes reading and verifying the identity and other required documentation from new customers with c.95% confidence and integrating this into the Policy Administration System (PAS) and/or other elements of the ecosystem.
- Risk of Lapse identifies customer segments that are at greatest risk of lapse. For instance, in one project the AI model found that the segment most likely to lapse in premium payments was recently divorced women in their thirties or forties.
- Optimised Claims Processing uses a combination of enablers such as object detection, de-noising and OCR (Optical Character Recognition) to automate the processing of death, accident or illness claims.
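To make the Risk of Lapse idea concrete, here is a minimal sketch of a lapse-risk scorer: a hand-weighted logistic model over illustrative policyholder features. The feature names and weights are hypothetical, chosen only to show the shape of such an enabler, not taken from any real model.

```python
import math

# Hypothetical coefficients that a trained model might produce.
WEIGHTS = {
    "months_since_last_premium": 0.08,
    "recent_divorce": 1.2,
    "age_30_to_49": 0.4,
    "partial_withdrawal_last_year": 0.6,
}
BIAS = -2.5

def lapse_risk(features: dict) -> float:
    """Return a probability-like lapse-risk score in [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A policyholder matching the high-risk segment scores higher
# than one with no risk indicators.
at_risk = lapse_risk({
    "months_since_last_premium": 3,
    "recent_divorce": 1,
    "age_30_to_49": 1,
})
low_risk = lapse_risk({"months_since_last_premium": 0})
```

In practice such weights would be learned from premium history, withdrawals and surrender data rather than set by hand; the sketch only shows how segment attributes turn into a ranked risk score.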
AI-Library of enablers
The Life and Pensions industry is sufficiently complex that generic AI tools will always struggle to perform and deliver required ROI. Lumera believes that the AI models need to recognise concepts like premium history, commissions, premium loans, partial withdrawals and surrenders, top-ups etc. We have spent several years investigating, experimenting and developing our understanding of these Life and Pensions-specific AI enablers, through our strategic partnership with CitAI, City, University of London’s Artificial Intelligence Research Centre.
We are now building a comprehensive library of enablers across the value chain. Today, we have a broad set of proven enablers for Object Detection, Natural Language Processing, Understanding and Inference, Optical Character Recognition, Predictive Analytics, Anomaly Detection, Document De-noising, Model Forecasting, Recommender Systems and Monitoring & Observability.
AI-Operations
AI-Operations runs below and behind the use cases and AI models, but it is the heart of any AI solution, defining everything from the design of the models to their ongoing maintenance.
Even before a project begins, AI-Operations organises the storage and management of the existing library of algorithms, spanning the full range from statistical models to deep learning, which the data scientists will access. Once the algorithms have been chosen, the Continuous Integration/Continuous Deployment (CI/CD) pipeline must be managed. AI-Operations also determines how to expose the AI model to the PAS as a service via an API. Different data sizes require different computational power and memory, and the models can comprise up to 20 million parameters. If the solution is to achieve the desired business outcomes in terms of accuracy, speed and cost, the memory footprint, latency and hardware cost each need to be optimised.
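Exposing a model to the PAS as a service can be sketched as a small request handler that could sit behind any HTTP framework. The `predict()` stub and the request/response schema here are assumptions for illustration, not Lumera's actual API.

```python
import json

def predict(payload: dict) -> dict:
    # Stand-in for the real model; returns a dummy score.
    return {"score": 0.5, "model_version": "v1"}

def handle_request(body: str) -> str:
    """Validate a JSON request and return a JSON response string."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return json.dumps({"error": "invalid JSON"})
    if "policy_id" not in payload:
        return json.dumps({"error": "policy_id is required"})
    result = predict(payload)
    result["policy_id"] = payload["policy_id"]
    return json.dumps(result)

ok = json.loads(handle_request('{"policy_id": "P-123"}'))
bad = json.loads(handle_request("not json"))
```

Keeping the handler separate from the model makes it straightforward to swap the stub for the deployed model while the PAS-facing contract stays stable.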
After deploying the AI model, accuracy and performance are likely to degrade over time. AI-Operations monitors the services, catching and correcting deteriorating accuracy. Deterioration comes in two forms: endogenous decay from within the system, and exogenous changes, for instance changes in document formats or the introduction of new documents.
AI-Operations monitors end-to-end logs to catch predictable, endogenous errors. For example, we have implemented a set of error handlers for known issues such as invalid dates.
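An invalid-date handler of this kind can be sketched as follows. The specific normalisations (the letter O read as zero, the handful of accepted formats) are illustrative assumptions about common OCR confusions, not a description of the production handlers.

```python
from datetime import datetime

# Illustrative set of date formats seen on claim documents.
FORMATS = ("%d/%m/%Y", "%Y-%m-%d", "%d %b %Y")

def parse_claim_date(raw: str):
    """Try known formats, correcting common OCR confusions first."""
    cleaned = raw.strip().replace("O", "0").replace("l", "1")
    for fmt in FORMATS:
        try:
            return datetime.strptime(cleaned, fmt).date()
        except ValueError:
            continue
    return None  # flag for manual review rather than crash the pipeline

good = parse_claim_date("O3/12/2022")  # OCR read '0' as 'O'
bad = parse_claim_date("31/02/2022")   # impossible date
```

Returning `None` rather than raising lets the pipeline route the document to manual review instead of failing the whole batch.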
A concept called Observability helps us catch a model that is failing to deal with exogenous change: in effect, we work from the big picture, spot where something is going wrong, and then work back through the process to identify the source.
We can use an AI technique called Anomaly Detection (AD) to analyse how to optimise the deployment environment and to predict rates of degradation. AD learns how the models behave and how they deviate from the desired performance. In this way, AI-Operations continuously tunes the AI models to maintain or even improve accuracy and performance.
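A simple form of this idea can be sketched as a statistical drift monitor: learn a baseline mean and standard deviation for a model's accuracy metric, then flag readings that deviate beyond three sigma. The metric stream and the 3-sigma threshold are illustrative assumptions; production AD enablers would typically learn richer behavioural profiles.

```python
import statistics

def fit_baseline(history):
    """Learn the normal behaviour of a metric from past readings."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, mean, std, threshold=3.0):
    """Flag a reading that deviates beyond `threshold` sigmas."""
    return abs(value - mean) > threshold * std

# Illustrative daily accuracy readings during normal operation.
baseline = [0.95, 0.96, 0.94, 0.95, 0.96, 0.95, 0.94, 0.96]
mean, std = fit_baseline(baseline)

normal = is_anomalous(0.95, mean, std)   # within baseline behaviour
drifted = is_anomalous(0.80, mean, std)  # e.g. after a document-format change
```

When the monitor fires, the Observability tooling described above would point operators at the step in the pipeline where the exogenous change entered.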