As the insurance landscape shifts from experimental “pilot mania” toward large-scale implementation, Simon Glairy stands as a pivotal voice in navigating this digital evolution. With a seasoned background in risk management and AI-driven assessment, Simon has witnessed firsthand how technology is moving from the fringes of innovation labs into the very core of global underwriting and claims operations. Today, he shares his insights on bridging the gap between small-scale tests and enterprise-wide success, emphasizing that the true value of artificial intelligence lies not in the novelty of the tool, but in the measurable outcomes and trust it builds with the end consumer.
Transitioning from localized experiments to enterprise-wide implementation requires a significant shift in leadership mindset. How do you distinguish a successful pilot from a truly scalable capability, and what specific commitments must executives make to ensure these tools transform core business outcomes rather than remaining isolated tests?
A successful pilot is often characterized by a controlled environment where we learn what works, but a scalable capability is defined by its integration into the core of how the business operates. To move beyond the initial excitement, executives must commit to a culture of risk-taking and long-term investment rather than chasing the next shiny technology. At Swiss Re, for example, we handle over 40,000 claims annually, and scale means embedding AI into that high-volume flow so it becomes the standard way of working. Leaders must ensure that these tools are not mere “add-ons” but are fundamentally changing outcomes like payout speed and accuracy. It requires a shift from asking “what can this tech do?” to “how does this reinvent our economic value?”
Claims and underwriting are often cited as high-impact areas where automation can provide a “moment of truth” for customers. How can firms accelerate these processes without compromising fraud detection, and what practical steps are necessary to move decision times from days down to just a few minutes?
Accelerating these processes hinges on creating a seamless bridge between data ingestion and decision-making. We are seeing a transformation where clients can now access underwriting decisions in minutes instead of the traditional hours or even days, which significantly enhances the customer experience. To maintain the integrity of fraud detection as we accelerate, we use AI to scan unstructured data and identify patterns the human eye might miss. The practical step involves deploying hundreds of specific use cases that cumulatively improve the accuracy of our risk assessments. By automating routine verification steps, we free up our experts to focus on complex cases, ensuring that the “moment of truth”—the claim payout—is both rapid and secure.
Legacy infrastructure is frequently blamed for slowing digital progress, yet it remains the reality for most established organizations. How should companies leverage their existing tech debt as a foundation for AI, and what are the specific risks of scaling technology before establishing rigorous data governance?
Legacy systems are an unavoidable reality for any company with a long history, as data often sits in environments that were never designed for AI, but this should never be used as an excuse for stagnation. We must view our hybrid environments and tech debt as a foundation to build upon rather than a barrier to break down. The real danger lies in scaling too quickly without a mature governance framework: poor data quality will only lead to unreliable AI outputs. For instance, Swiss Re spent eight years investing heavily in data storage, accessibility, and quality to give its AI a solid base. Without rigorous monitoring and testing to ensure models behave correctly, you risk eroding the fundamental trust that the entire insurance industry is built upon.
Success in this field often hinges on bridging the gap between data scientists and domain experts like actuaries. What specific training or cultural strategies facilitate this collaboration, and how do you ensure that internal productivity gains eventually translate into a measurably better experience for the end customer?
The most effective strategy is to create a shared language between technical teams and those with deep domain expertise, which is why initiatives like an AI academy are so vital. When actuaries and data scientists work in the same “overlap” zone, they can ensure that AI models are not just mathematically sound but also relevant to underwriting needs. While many gains have been internal so far—such as streamlining workflows and empowering employees—these productivity boosts are the engine that drives better customer service. By reducing the administrative burden on our staff, we allow them to provide more personalized and efficient interactions. Ultimately, a more productive internal process leads to faster responses and more transparent pricing for the policyholder.
Maintaining customer trust is essential when automated models handle sensitive financial decisions. What frameworks should be used to monitor and test these models for behavioral consistency over time, and what are the trade-offs between increasing system autonomy and maintaining human oversight?
Trust is our most valuable currency, and if we lose it, we lose everything, which is why a robust monitoring framework is non-negotiable. We must continuously test models to ensure they remain consistent and unbiased, especially as they handle sensitive financial data over long periods. The trade-off between autonomy and oversight is a delicate balance: while greater autonomy lets us process those 40,000 annual claims more efficiently, human intervention remains necessary for high-stakes or edge-case scenarios. We use iterative testing to verify that the technology meets our specific business needs and creates real economic value. Maintaining a “human-in-the-loop” approach for complex decisions ensures that our automated systems stay aligned with our ethical standards and professional judgment.
What is your forecast for AI in the insurance industry?
My forecast is that the industry will move away from “pilot mania” toward a period of consolidation in which the winners are defined by their ability to achieve true enterprise scale. AI will become invisible, moving from a highlighted “feature” to a standard component of every underwriting and claims process. I expect a massive shift where the cumulative effect of hundreds of small AI use cases creates a more resilient, responsive, and data-driven industry. Within the next few years, the gap between those who invested in data governance and those who only ran surface-level experiments will become an unbridgeable chasm. Success will no longer be measured by the number of bots we deploy, but by the speed and trust we deliver to our customers.
