From Hype to Impact: The Insurance Industry’s AI Inflection Point
As insurers shift from experimentation to execution, the focus is on achieving tangible value from AI investments.

Generative AI (GenAI) has captivated the insurance industry, sparking intense discussion about its transformative potential. In 2022 and 2023, insurers and their partners debated GenAI’s impact endlessly, but as the dust settles, it’s clear that turning potential into performance is no small task. In 2024, organizations moved from talk to action, experimenting with proofs of concept (POCs) and pilot programs. According to IDC, the average insurer deployed 24 POCs in 2024. However, most are struggling to achieve the anticipated value – only 68% report their POCs met the intended KPIs, and a mere 8% were successfully integrated into production.
At the same time, for the first time in years – and the first time since GenAI hype took over every insurance talk track – underwriting profitability is turning positive, expanding liquidity and opening the door to investment in the alluring promise of AI-driven returns. As the industry looks ahead to 2025, the focus is pivoting from exploration and ideation to formalized business case development, execution, and value recognition, driven by the need to realize tangible value from AI investments in a shifting economic landscape.
AI’s potential is undeniable, but its successful execution hinges on the foundation it’s built upon: data. Poor data quality, fragmented systems, and siloed processes frequently undermine AI deployments, leaving insurers stuck in a cycle of experimentation without impact. A new approach is needed—one that doesn’t replace existing master data management (MDM) or data fabric initiatives but accelerates them and provides lift in the interim. Modern data architectures largely lack a decision intelligence layer between messy, siloed data stores and data-driven applications, and Quantexa’s Contextual Fabric serves as this bridge: integrating, resolving, and contextualizing data to unlock AI’s full power.
2024: Experimentation and limited success
As mentioned above, only 8% of GenAI POCs were successfully integrated into production. So why the disconnect?
The top barriers cited across recent reports from IDC, EY, Baker Tilly, and Deloitte include:
Data quality issues: Unstructured data, silos, and legacy systems prevent AI models from delivering reliable results. MDM efforts have been deployed against these issues for years, but data proliferation – the continual introduction and evolution of new data – along with persistent quality issues have made it hard for MDM efforts to keep up.
Regulatory uncertainty: AI regulations are still taking shape, and many insurers are hesitant to jump headfirst into investments that may become problematic within months. Especially where data quality issues abound, insurers want to ensure their decisions are explainable and based on sound data.
Fragmented initiatives: Many organizations succumbed to “random acts of AI,” with disparate teams pursuing overlapping or poorly aligned projects. Efforts are underway to standardize and govern these efforts, but the promises of AI benefits have driven individual actors to run independently, and organizational change faces inertia, especially within large insurers.
Talent shortages: A lack of technical expertise in AI model development and governance further complicates scaling efforts. Much of the top tech talent looks toward big tech for career opportunities rather than 100-plus-year-old insurance institutions. This puts insurers on the back foot, needing to invest significantly in both the technologies and the talent to develop and leverage them.
As one panelist put it during a recent Endava webinar including technology leaders from EMPLOYERS, Liberty Mutual, and Marsh McLennan, “Organizations are no longer struggling to imagine the value AI can create—they’re struggling to execute in ways that deliver that value.”
2025: A turning point for AI in insurance
While 2024 highlighted the limitations of fragmented AI efforts, 2025 offers a chance to learn from these challenges. With underwriting profitability beginning to recover across the industry and organizations allocating more capital to growth initiatives, 2025 presents an opportunity to reset and refocus AI efforts. The industry’s attention is shifting toward structured, high-impact use cases that promise measurable results. Notable trends include:
Focusing on the right use cases:
Business leaders are increasingly asking, ‘What are the most valuable use cases, and what is our fastest path to value?’ This is driving a shift toward targeted use cases where measurable ROI is clear and achievable. A survey by EY-Parthenon found that 69% of insurers are prioritizing AI initiatives targeting specific areas of the value chain, with enhanced underwriting, predictive risk assessments, and decision automation topping the list. Bolt-on GenAI chatbots, while initially popular due to their “ease” of building into existing solutions, are being deprioritized in favor of more integrated use cases that leverage broader internal data and domain context to augment generically trained LLMs.
Centralizing AI governance:
To combat the pitfalls of disjointed initiatives, more insurers are appointing “AI orchestrators” to align strategies across functions, ensure adherence to approved AI reference architectures, unify data efforts, and track value realization. This centralized approach ensures that AI deployments address enterprise-level priorities rather than isolated departmental needs.
Doubling down on data foundations:
Insurers increasingly recognize that AI is only as good as the data it’s built upon. Improving data quality, integrating siloed systems, and developing “data-as-a-product” architectures are becoming prerequisites for successful AI adoption. According to Deloitte, a growing number of insurers are transitioning to data mesh architectures to balance scalability with use case-specific flexibility. Furthermore, data issues aren’t just technical—they’re strategic. For example, while handling underwriting submissions, unstructured customer data hidden in claims notes or incomplete views of commercial profiles might mean missed opportunities to upsell or cross-sell, impacting growth metrics directly tied to leadership priorities.
Investing in talent:
Beyond hiring technical experts, insurers are emphasizing cross-functional collaboration and upskilling existing employees to work alongside AI. Zurich Insurance, for example, is leveraging analytics to map skill gaps and curate development opportunities for its workforce.
The role of Generative AI
GenAI remains a key focus; EY-Parthenon research found 83% of insurers are actively exploring or investing in it. While chatbot use cases dominated initial deployments, attention is now shifting to areas like:
User experience: Leveraging AI to understand user profiles, needs, and interests to personalize experiences for customers and partners.
Predictive risk assessments: Leveraging generative AI to synthesize vast data sets and enhance risk modeling.
Enhanced underwriting: Using AI to extract insights from unstructured data (e.g., contracts, reports) and streamline decision-making processes.
Operational efficiency: Automating routine tasks to free up human resources for higher-value activities.
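To make the “enhanced underwriting” idea above concrete, here is a minimal sketch of pulling structured fields out of unstructured submission text. In production an LLM would handle messy, varied phrasing; a deterministic regex stand-in is used here purely to show the shape of the input and output. The field names and the sample text are invented for illustration.

```python
# Hypothetical sketch: turning unstructured underwriting text into a
# structured record. A regex stand-in replaces what an LLM extraction
# step would do in practice; sample text and field names are invented.
import re

SUBMISSION = """
Policy No: CP-204871. The insured requests a general liability limit
of $2,000,000 effective 2025-03-01 for its warehousing operations.
"""

def extract_fields(text: str) -> dict:
    """Return key underwriting fields as a structured record."""
    return {
        "policy_no": re.search(r"Policy No:\s*([A-Z]+-\d+)", text).group(1),
        "limit_usd": int(re.search(r"\$([\d,]+)", text).group(1).replace(",", "")),
        "effective": re.search(r"effective\s+(\d{4}-\d{2}-\d{2})", text).group(1),
    }

print(extract_fields(SUBMISSION))
# → {'policy_no': 'CP-204871', 'limit_usd': 2000000, 'effective': '2025-03-01'}
```

Once fields land in a structured record like this, they can feed downstream risk models and decision automation directly.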
Despite its promise, GenAI comes with unique challenges, including accuracy concerns, regulatory uncertainty, and the risk of propagating biases in data. Addressing these challenges requires a holistic approach that balances innovation with robust governance.
Insurers are realizing that GenAI’s effectiveness depends on robust foundational data and the ability to retrieve contextually relevant insights—an area where Quantexa’s Contextual Fabric excels. By enabling retrieval-augmented generation (RAG), insurers can ensure GenAI applications like underwriting, service, and claims copilots, personalized content prefills, automated documentation tools, and talent training models are both accurate and actionable.
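The retrieval-augmented generation pattern described above can be sketched in a few lines. This is not Quantexa’s API – the store, the scoring function, and all record contents are hypothetical – but it shows the core move: retrieve the most relevant resolved-entity context first, then ground the model’s prompt in it rather than relying on the LLM’s generic training alone.

```python
# Minimal RAG sketch over a toy store of deduplicated, contextualized
# entity records. All names and data are invented for illustration.
RESOLVED_ENTITIES = [
    {"id": "E1", "text": "Acme Logistics: 3 open commercial policies, 2 claims in 2023, fleet of 40 vehicles."},
    {"id": "E2", "text": "Jane Smith: personal auto policy, linked as director of Acme Logistics."},
]

def retrieve(query: str, store: list, k: int = 1) -> list:
    """Rank records by naive keyword overlap with the query (a real
    system would use embeddings and graph context instead)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda r: len(q_terms & set(r["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, store: list) -> str:
    """Assemble an LLM prompt grounded in the retrieved context."""
    context = "\n".join(r["text"] for r in retrieve(query, store))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

prompt = build_prompt("What claims history does Acme Logistics have?", RESOLVED_ENTITIES)
print(prompt)
```

The design point is that the quality of the answer is bounded by the quality of the retrieved context: if the store holds duplicated or fragmented records, the copilot inherits that mess.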
Data as the key to AI’s success
At the heart of every AI challenge lies a data challenge. Poor data quality, fragmented systems, and inconsistent governance continue to impede progress. Addressing these issues is no small task, but the payoff is significant: cleaner, more accessible data enables insurers to extract actionable insights, enhance decision-making, and scale AI solutions effectively.
For example, a global, multi-line Tier 1 insurer came to Quantexa looking to drive growth but was hamstrung by siloed data across lines of business. With our Contextual Fabric, we connected personal and commercial lines, building a holistic view of prospects and opportunities. By leveraging this enhanced, contextualized view of its data, the client increased up-sell and cross-sell conversion ratios by more than 50% and generated an additional $200m in revenue in one year.
How Quantexa can help
Breaking down silos:
By connecting disparate data sources across lines of business, regions, and functional areas, Quantexa helps insurers eliminate redundancies and unlock the full value of their information assets, paving the way for cohesive AI strategies. Our platform has proven its ability to scale to 60B records, so no data ecosystem is too large for us to handle.
Data integration and contextualization:
Quantexa’s ability to resolve fragmented data across silos provides insurers with a holistic view of their operations, customers, and risks. This enables better model training on the data science side, and AI solutions deployed against Quantexa’s Contextual Fabric can ensure retrieval-augmented GenAI points to deduplicated, contextualized data rather than isolated, messy sources.
Enhancing AI governance:
Our solutions support transparent, accountable AI governance by ensuring that models are fed accurate, reliable data that can stand up to audits. Our platform lets insurers see clearly which entities have been merged, from which sources, and through which data points. Furthermore, we enable explainable, interrogatable models to be built on rich analytics features that are typically hard to leverage without black-box models, keeping regulatory compliance simple to maintain while optimizing model accuracy.
Accelerating and enhancing data management:
By showing how data stores are merged and entities are resolved, Quantexa provides clear breadcrumbs for data engineers to follow when conducting complex data mastering. Our platform enables data analysts and engineers to identify which data may serve best as the source of truth and how to build match-and-merge logic into core data pipelines.
Maximizing ROI from high-impact use cases:
From underwriting to claims and customer engagement, Quantexa equips insurers with the tools they need to execute targeted AI initiatives that deliver measurable ROI. We do this through one core dynamic graph instance, allowing us to build once and deploy across varying use cases. While each use case may have its own resolution and contextual-linkage requirements, our solutions ultimately reduce the compute cost per use case, enabling insurers to scale their AI efforts more meaningfully.
Importantly, these benefits have been seen in action by the organizations we work with. For example, Chubb, like many global multi-line insurers, suffered from significant data sprawl, with regional, functional, and product-line silos. It sought to improve the effectiveness of decision-making across its value chain but struggled to connect the relevant data needed to power effective decisions.
In 10 weeks, Quantexa deployed an MVP connecting ~10 data sources, including internal policy and claims data along with Verisk, D&B, and NICB data, to power claims analytics. This resulted in 30% to 62% record deduplication across various record types and 90%+ efficiency gains in complex claim case handling.
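Record deduplication of the kind described above can be sketched in miniature: group records that agree on a normalized key, then keep one representative per group. Real entity resolution uses far richer matching (fuzzy names, addresses, graph context); the normalization rules and sample records below are invented for illustration and are not Quantexa’s algorithm.

```python
# Illustrative deduplication sketch: normalize party names, then merge
# records sharing a key. Sample data and rules are hypothetical.

def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation and legal suffixes."""
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c == " ")
    tokens = [t for t in cleaned.split() if t not in {"inc", "llc", "ltd", "co"}]
    return " ".join(tokens)

def deduplicate(records: list) -> dict:
    """Map each normalized key to the raw records it merges."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize(rec), []).append(rec)
    return groups

claims_parties = ["Acme Logistics, Inc.", "ACME Logistics", "acme logistics inc", "Beta Carriers LLC"]
merged = deduplicate(claims_parties)
dedup_rate = 1 - len(merged) / len(claims_parties)
print(merged)
print(f"deduplication rate: {dedup_rate:.0%}")  # → 50%
```

Even this toy version shows why deduplication rates vary by record type: the messier and more redundant the source records, the more a resolution pass collapses them.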
From potential to performance
The insurance industry’s journey with AI is entering a critical phase. Success won’t come from isolated experiments or scattered investments but from building strong, scalable foundations. Organizations that focus on foundational elements—data quality, governance, and strategic alignment—will be well-positioned to realize AI’s transformative potential. For insurers and their partners, the next chapter is not about “random acts of AI” but about deliberate, data-driven strategies underpinned by holistic views of clean data. With Quantexa’s Contextual Fabric, the first Decision Intelligence layer in the market, insurers can move beyond POCs to create real, measurable impact—turning data into decisions and AI into action.
Quantexa is ready to be a key partner in this journey, helping insurers navigate the complexities of AI adoption and unlock the insights hidden in their data. As such, we have put together an “AI-Ready Data Assessment” framework, which we would be happy to work with your organization to deploy. Let’s move from discussing AI’s potential to deploying its full power—together.
