Summary

Generative AI (GenAI) has already profoundly impacted the insurance landscape, and the technology continues to reshape the industry paradigm with new use cases. In this article, we cover how this transformative technology is creating new efficiencies and innovation in the insurance ecosystem. We also explore some inherent GenAI challenges and how to overcome them with ethical and accountable implementation strategies.

The insurance landscape is undergoing a paradigm shift driven by the disruptive potential of generative AI (GenAI). This transformative technology, capable of mimicking human language, holds immense promise for an industry that is deeply regulated and designed around actuarial science for accurate risk management.

GenAI promises to reshape how insurers operate, interact with customers, and manage risk. The paradigm shift will see the insurance industry move from a ‘detect and repair’ mode to a ‘predict and prevent’ mode as GenAI-based use cases are identified and put into play.

Key use cases – The low-hanging fruit

In a highly regulated industry where GenAI adoption is still in its infancy, the tendency is to pick use cases that offer the least resistance to adoption. Some examples that are quickly gaining traction include:

  • Submission and ingestion automation: Insurers have been using AI/ML to automate document ingestion for new submissions to fast-track underwriting. With GenAI in the mix, mathematical models and language models come together to streamline the experience. For instance, for underwriters receiving submissions from various sources, GenAI makes it easy to run language translation, conduct data lookups, assess cyber scores for clients, and access both internal and external data stores for additional information, such as loss runs. Work that once took days or weeks is now compressed into a few minutes.
  • Claims and first notice of loss (FNOL) automation: GenAI can make the cumbersome claims process easier for both claimants and insurers. For instance, GenAI chatbots can guide customers in natural language as they submit the FNOL with the appropriate data, making the process effortless and eliminating the need for repeated requests for information. For insurers, these models can summarize claim applications, assist adjusters, help determine liability, and draft accurate subrogation documents when needed. The potential savings in cost and effort are substantial.
  • Workforce productivity: GenAI can augment the insurance workforce's ability to underwrite policies, process claims, and service clients. Productivity gains in customer service can translate to cost savings of 40% to 60%. Here are three scenarios to consider.
    1. Search and retrieval – If customer executives have the right content at their fingertips, what changes? Text embedding and summarization models let them respond to customer queries faster and more accurately (a minimal retrieval sketch follows this list). Sompo International’s Retail Property Team is using GenAI to enhance search and retrieval, freeing up agents to make better use of their time on calls.
    2. Advisor planning – Having relevant content available during planning can directly impact sales. Northwestern Mutual’s financial advisors can consult the Next Best Action (NBA) tool, which recommends financial products to help clients achieve greater financial security.
    3. AI co-pilots – GenAI co-pilots provide advanced support and guidance across tasks and projects, enhancing employee effectiveness. American Family Insurance, for instance, is exploring ways AI can free up agents from routine jobs, making them available for interactions that require a human touch.
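To make the search-and-retrieval scenario concrete, here is a minimal sketch of embedding-based document search in Python. It assumes the open-source sentence-transformers package and uses illustrative policy snippets; it does not reflect the actual tooling used by any of the insurers named above.

```python
# Minimal sketch: embedding-based search over policy/claims text.
# Assumes the open-source `sentence-transformers` package; the snippets
# and the query below are illustrative, not real insurer data.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Water damage from burst pipes is covered under section 4 of the homeowners policy.",
    "Flood damage requires a separate endorsement and is excluded from the base policy.",
    "Claims must be reported within 30 days of the date of loss.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    """Return the top_k documents ranked by cosine similarity to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector  # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in best]

for doc, score in search("Is flooding covered?"):
    print(f"{score:.2f}  {doc}")
```

In practice, the retrieved snippets would be passed to a language model for summarization or a drafted reply, with the agent reviewing the output before it reaches the customer.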

It’s clear from these use cases that GenAI is driving a significant shift in how insurance organizations approach innovation, research, and operational efficiency. However, any new technology matures over time, and it takes a while for the initial creases to be ironed out.

Navigating the GenAI storm: Balancing innovation and risk

Integrating GenAI into businesses comes with several challenges and risks. To begin with, GenAI can produce wrong, biased, or outdated outputs, which could also lead to intellectual property issues. The models themselves are complex, making it hard to understand how they arrive at their results and raising questions about their fairness and reliability. Using GenAI also involves handling large volumes of data, which raises important ethical questions about privacy and data ownership. In addition, adopting GenAI requires significant changes to existing workflows, and because the technology keeps evolving quickly, it can disrupt established business processes and force businesses to adapt continuously.

Despite these challenges, the GenAI genie is out of the bottle and there is no turning back, only moving forward.

So, how can we navigate GenAI’s potential when we are barely scratching the surface of understanding it? The key lies in striking a balance between cautious adoption and bold exploration. Here are some essential strategies:

  • Human in the loop: Don’t expect GenAI to replace human judgment overnight. Instead, focus on using it to augment human decision-making. For example, GenAI co-pilots can provide support and insights to underwriters or claims adjusters while they make the final decision based on their experience.
  • Guardrails and oversight: Even as you add human oversight to prevent unintended consequences, establish clear boundaries for acceptable outputs. For instance, GenAI models can be instructed not to fabricate information beyond what’s provided in the document, dataset, or paragraph, and to clearly state that they lack sufficient information when they cannot find an answer (a minimal prompt sketch follows this list).
  • Governance and regulation: Keep an eye on evolving regulations and proactively adhere to them. Go a step further in operating within the ethical and legal boundaries by anticipating and preparing for future frameworks.
  • Training and experimentation: Thoroughly train your models on high-quality, unbiased data and conduct rigorous pilot tests before deploying them in real-world scenarios. Also, train your people to use GenAI responsibly. For instance, Northwestern Mutual offers NM GPT for all its employees to experiment with, helping them discover the strengths and weaknesses of the technology.
  • Continuous learning and monitoring: Don’t expect to “set it and forget it” with GenAI. Regularly monitor and update your models to ensure they remain accurate, unbiased, and aligned with your evolving needs.
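As a concrete illustration of the guardrails point above, the sketch below wraps retrieved context in a prompt that tells the model to answer only from the supplied text and to admit when that text is insufficient. The prompt wording and the call_llm placeholder are assumptions for illustration and are not tied to any specific vendor API.

```python
# Minimal sketch of a grounded-answer guardrail: the model is instructed to
# answer only from the supplied context and to say so when the context is
# insufficient. `call_llm` is a placeholder for your chosen chat-completion API.

GUARDRAIL_PROMPT = """You are an assistant for insurance staff.
Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly:
"I do not have sufficient information to answer this question."

Context:
{context}

Question: {question}
"""

REFUSAL = "I do not have sufficient information to answer this question."

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your model provider and return the reply."""
    raise NotImplementedError("Wire this up to your LLM of choice.")

def guarded_answer(question: str, context: str) -> str:
    # Input-side guardrail: refuse up front if there is nothing to ground on.
    if not context.strip():
        return REFUSAL
    prompt = GUARDRAIL_PROMPT.format(context=context, question=question)
    return call_llm(prompt)
```

A human reviewer, such as the underwriter or adjuster from the human-in-the-loop point, would still vet any answer before it is acted on.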

Four Guiding Pillars for Responsible Adoption in Insurance

  1. Fairness: Ensure that your models and processes avoid bias and proxy discrimination, so the organization delivers equitable treatment to all individuals.
  2. Accountability: Be open to regulatory scrutiny by ensuring the auditability of your models; this helps maintain transparency and trust in the use of AI technologies.
  3. Sustainability: Balance AI’s impact on society and ensure model reliability, consistency, and robustness to make AI usage sustainable over time.
  4. Transparency: Demonstrate the lineage from data to processing to outcomes, enabling auditability and readiness to present evidence to stakeholders.

The road ahead for GenAI in insurance is brimming with possibilities. By harnessing its power responsibly, insurers can unlock remarkable benefits: unparalleled efficiency, deep customer engagement, and, ultimately, a more responsive and dynamic industry. In the words of Geoffrey Hinton, a pioneer in the field of AI, “This is a caterpillar waiting to become a butterfly.”

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the respective institutions or funding agencies.