Simon Glairy is a titan in the insurtech world, renowned for dissecting the friction between traditional insurance models and the rapid-fire evolution of digital risk. As the landscape of cyber threats shifts under the influence of generative AI and complex privacy laws, his perspective on risk management has become an essential compass for global enterprises. This conversation explores the widening chasm between exposure and protection, the necessity of specialized brokerage, and why the future of AI coverage isn’t a new product, but a fundamental shift in how we view operational processes.
Inconsistent policy language often makes comparing insurance quotes nearly impossible. How do you identify hidden gaps when definitions vary so widely between carriers, and what specific benchmarks should a risk manager use to ensure their coverage aligns with actual technical exposures?
The reality is that the cyber insurance market lacks any meaningful standardization, leaving risk managers with a fragmented landscape in which no two quotes are truly identical. Because each carrier uses its own definitions, limits, and exclusions, comparing quotes "apples to apples" becomes a Herculean task for a business leader. To identify hidden gaps, one must look beyond premium costs and scrutinize the affirmative language around specific attack vectors, as many exclusions now hide in plain sight. A reliable benchmark is not just a checklist of technical controls; it requires a deep dive into whether the policy language actually meets the sophistication of the attacks we see daily. Discovering a coverage gap during a live breach is devastating for any executive, which is why we must move toward bespoke negotiations that ensure the language reflects a firm's real-world technical exposures.
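The comparison problem described here can be made concrete. As a rough sketch only (the carrier name, field names, and dollar figures below are all hypothetical, not drawn from any real policy form), a risk manager might normalize each quote into a common structure and flag where it diverges from an internal benchmark of required attack vectors and minimum sub-limits:

```python
from dataclasses import dataclass

# Hypothetical normalized quote; real carrier forms vary widely,
# which is exactly the standardization gap discussed above.
@dataclass
class CyberQuote:
    carrier: str
    premium: float
    sublimits: dict            # coverage head -> limit in dollars
    affirmative_coverage: set  # attack vectors the policy explicitly covers
    exclusions: set            # attack vectors the policy explicitly excludes

def find_coverage_gaps(quote, required_vectors, required_sublimits):
    """Flag exposures the quote excludes, is silent on, or under-limits
    relative to the organization's own benchmark."""
    gaps = []
    for vector in sorted(required_vectors):
        if vector in quote.exclusions:
            gaps.append(f"{vector}: explicitly excluded")
        elif vector not in quote.affirmative_coverage:
            gaps.append(f"{vector}: silent (no affirmative language)")
    for head, minimum in required_sublimits.items():
        if quote.sublimits.get(head, 0) < minimum:
            gaps.append(f"{head}: sub-limit below benchmark")
    return gaps

# Two quotes at the same premium can hide very different gaps.
quote = CyberQuote(
    carrier="Carrier A",
    premium=50_000,
    sublimits={"ransomware": 1_000_000, "social_engineering": 100_000},
    affirmative_coverage={"ransomware", "data_breach"},
    exclusions={"nation_state_attack"},
)
gaps = find_coverage_gaps(
    quote,
    required_vectors={"ransomware", "social_engineering", "nation_state_attack"},
    required_sublimits={"social_engineering": 250_000},
)
print(gaps)
```

The point of a structure like this is not automation for its own sake: forcing every quote into the same shape is what surfaces the "silent" vectors, where the policy neither covers nor excludes an exposure and the gap only becomes visible during a claim.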
Cyber policies now undergo revisions as frequently as every quarter to keep pace with emerging threats. How can an organization ensure its broker is staying ahead of these rapid shifts, and what are the specific risks of relying on a generalist advisor versus a dedicated cyber specialist?
We have entered an era where policy revisions that once happened every few years now occur quarterly, making a "set it and forget it" mentality a recipe for disaster. Relying on a generalist advisor creates a massive structural vulnerability, because these individuals often lack the granular depth to interpret how a subtle shift in a sub-limit might leave a company completely exposed. A broker who isn't living in the cyber market daily will almost certainly miss a critical update or a nuanced exclusion that could mean the difference between a paid claim and a total loss. The risk of the generalist is essentially the risk of ignorance: they may secure a lower price, but they often fail to advocate for the specific, evolving terms that high-threat environments demand. Organizations should demand a specialist who can demonstrate continuous engagement with carrier updates, ensuring the protection layer is as dynamic as the threats it aims to mitigate.
Underwriting is shifting focus from basic technical controls to complex data collection practices and third-party privacy litigation. What detailed questions should leadership expect regarding data governance, and how does a company demonstrate maturity when the legal landscape for data consent is constantly evolving?
Underwriters have moved far beyond asking if you have a firewall; they are now performing a deep-tissue scan of your data governance and privacy consent frameworks. Leadership should expect pointed, uncomfortable questions about exactly how they collect, store, and share information, particularly given the surge in third-party litigation over data collection and consent. Demonstrating maturity in this space requires more than a signed consent form; it means showing a holistic strategy in which legal, compliance, and IT departments are fully synchronized. Carriers are looking for evidence that a company understands its data lifecycle and takes a proactive rather than reactive approach to evolving privacy laws. When a company can show that its data usage is governed from an enterprise-wide perspective, it signals to the insurer a lower-risk profile despite the volatile legal environment.
Operational delays during a cyberattack often stem from a lack of pre-set protocols or indecision regarding ransomware payments. What specific steps should be included in an incident response plan, and how can pre-selecting panel firms or conducting conflict checks in advance reduce downtime?
The difference between a controlled recovery and a total operational collapse often comes down to the decisions made months before a breach occurs. A robust incident response plan must include pre-selected panel firms and completed conflict checks; waiting until an attack is underway to find a lawyer or a forensic team is a catastrophic waste of time. One of the most common failure points we see is the absence of a pre-defined stance on ransomware payments, which leads to agonizing indecision while the business remains offline. I recall a healthcare case where this internal hesitation allowed threat actors to escalate pressure directly onto patients, turning a technical crisis into a human tragedy. By establishing clear protocols and knowing exactly who to call the moment the screens go dark, an organization can drastically reduce the downtime that otherwise hemorrhages money and reputation.
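The readiness check described here, which firms can be called the moment screens go dark, lends itself to a simple illustration. The sketch below is hypothetical (firm names are placeholders, and the readiness criteria of completed conflict checks plus a signed retainer are an assumed minimal standard, not a prescribed one):

```python
from dataclasses import dataclass

# Hypothetical incident-response runbook entries; names are placeholders.
@dataclass
class PanelFirm:
    role: str                 # e.g. "breach counsel", "forensics"
    name: str
    conflict_check_done: bool
    retainer_signed: bool

# A pre-defined ransom stance removes mid-incident indecision.
RANSOM_STANCE = "no payment decision without breach counsel and sanctions screening"

RUNBOOK = [
    PanelFirm("breach counsel", "Example Law LLP", True, True),
    PanelFirm("forensics", "Example DFIR Inc.", True, False),
    PanelFirm("ransom negotiator", "Example Negotiators Ltd.", False, False),
]

def first_calls(runbook):
    """Split the panel into firms engageable immediately versus firms
    that will cost hours of conflict checks or paperwork mid-incident."""
    ready = [f for f in runbook if f.conflict_check_done and f.retainer_signed]
    blocked = [f for f in runbook if not (f.conflict_check_done and f.retainer_signed)]
    return ready, blocked

ready, blocked = first_calls(RUNBOOK)
print("callable now:", [f.role for f in ready])
print("close before an incident:", [f.role for f in blocked])
```

Run quarterly against the real panel list, a check like this turns "who do we call?" from a mid-breach scramble into a pre-incident audit finding.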
AI risks are increasingly manifesting across multiple lines, including professional liability and employment practices, rather than just cyber policies. Why is a standalone AI policy often considered impractical, and how should organizations audit their existing coverage to ensure AI-driven discrimination or bodily injury claims are addressed?
There is a common misconception that we need a “special” policy for AI, but the truth is that AI is not a separate coverage type—it is an integrated process that touches every part of a business. Because AI risks manifest in so many different ways, from discriminatory hiring algorithms to bodily injury caused by automated systems, a standalone policy would be too narrow to be effective. Instead, organizations must audit their existing towers, including General Liability and Employment Practices Liability Insurance, to ensure that the definition of a “claim” or “occurrence” doesn’t inadvertently exclude AI-driven actions. We are seeing these risks bleed across multiple lines, which means the audit must be a cross-departmental effort to find and bridge any gaps in the existing language. Treating AI as a process allows us to embed protection within the current framework, ensuring that as the technology evolves, the coverage evolves alongside it rather than sitting in a silo.
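The cross-line audit described above can be sketched as a first-pass language scan. This is an illustration only: the policy wordings, line names, and red-flag phrases below are hypothetical, and a real audit requires counsel reading full forms, not keyword matching:

```python
# Hypothetical first-pass audit: scan each line of coverage for wording
# that could sweep AI-driven acts out of the definition of a covered claim.
AI_RED_FLAGS = (
    "artificial intelligence",
    "automated decision",
    "algorithmic",
    "machine learning",
)

# Placeholder excerpts standing in for full policy forms.
policies = {
    "General Liability": "Excludes bodily injury arising from automated decision systems.",
    "EPLI": "Covers claims of discrimination in hiring, however caused.",
    "Cyber": "Covers network security failures; silent on artificial intelligence.",
}

def audit_ai_language(policies):
    """Return, per line of coverage, any AI-related wording found,
    or a note that the form is silent and needs human review."""
    findings = {}
    for line, wording in policies.items():
        text = wording.lower()
        hits = [flag for flag in AI_RED_FLAGS if flag in text]
        findings[line] = hits or ["no AI-specific language: confirm silence means coverage"]
    return findings

for line, hits in audit_ai_language(policies).items():
    print(line, "->", hits)
```

A scan like this only triages; the substantive question in each flagged form, whether an AI-driven act still falls within the definition of a "claim" or "occurrence", is exactly the cross-departmental review the interview calls for.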
What is your forecast for cyber insurance?
I anticipate that the industry will move away from the current focus on price optimization and toward a much more strategic, integrated model of risk transfer. We will see AI considerations become fully embedded into the DNA of every policy, moving from bespoke negotiations to standard, affirmative language across all lines. The gap between the fast-moving threat landscape and slow-moving insurance language will eventually narrow as carriers adopt more real-time underwriting tools to model dynamic risks. Ultimately, the winners in this market will be the organizations that treat cyber insurance not as a defensive purchase, but as a core component of their enterprise-wide resilience strategy. Expertise and preparation will become the primary currency, and those who rely on specialized advice will find themselves much better positioned to weather the storms of the next decade.
