I’m thrilled to sit down with Simon Glairy, a renowned expert in insurance and Insurtech whose deep knowledge of risk management and AI-driven risk assessment has made him a leading voice on emerging technologies in the insurance landscape. In this interview, we explore how AI is reshaping commercial insurance: its integration into cyber policies, the challenges posed by broad exclusions in other lines, the potential for standalone AI coverage, and the evolving strategies businesses must adopt to manage these new risks. Join us for an insightful conversation at the intersection of innovation and protection.
How are cyber insurance carriers addressing AI-related threats in their policies today?
Cyber insurance carriers are largely taking a proactive stance when it comes to AI-related threats. Rather than pulling back, many are adding specific endorsements to their policies to make it clear that they intend to cover losses caused by AI-driven attacks. This approach reflects an understanding that AI doesn’t necessarily create entirely new risks but amplifies existing ones, like fraud or data breaches, and they’re choosing to adapt rather than exclude.
What drives most cyber insurers to reinforce coverage for AI-driven attacks instead of excluding them?
I think it comes down to market demand and the recognition that AI-driven attacks are becoming a core part of the cyber threat landscape. Insurers see that excluding these risks would make their policies less competitive, especially as businesses face increasingly sophisticated threats like deepfakes. Reinforcing coverage also aligns with the industry’s broader goal of providing clarity and support to policyholders in a rapidly evolving digital environment.
Can you describe how AI-fueled attacks, such as deepfakes or social engineering fraud, integrate into the broader cyber risk landscape?
AI-fueled attacks are essentially a turbocharged version of risks we’ve seen for years. Deepfakes, for instance, are just a more advanced form of impersonation or social engineering fraud, where attackers manipulate trust to gain access or steal funds. They fit into the existing landscape by exploiting the same vulnerabilities—human error, weak verification processes—but with greater precision and scale, making them harder to detect and prevent.
Outside of cyber insurance, where are you seeing the most significant AI exclusions emerging?
The most notable AI exclusions are popping up in lines like directors and officers (D&O), errors and omissions (E&O), and other management liability coverages. These aren’t areas traditionally tied to technology risks, which makes the broad language of some exclusions particularly concerning. Carriers are starting to limit exposure to AI-related decision-making or reliance, and it’s creating uncertainty for businesses in these sectors.
What makes some of these broad AI exclusions in non-cyber policies so troubling for businesses?
The issue lies in the vague and expansive wording of these exclusions. Some policies define AI so broadly that almost any use or dependence on it could trigger a denial of coverage. This lack of specificity means businesses might think they’re covered for certain risks, only to find out after a loss that their policy doesn’t apply, leaving them exposed to significant financial and legal consequences.
How are these wide-ranging AI exclusions impacting businesses in areas like D&O and E&O right now?
These exclusions are creating a ripple effect. Businesses are finding themselves reevaluating their risk exposure, especially in areas like decision-making processes that rely on AI tools. There’s a growing concern about potential gaps in coverage for lawsuits or claims tied to AI misuse or failure, which could hit executives and professionals hard. It’s pushing companies to scrutinize their policies more closely and, in some cases, seek additional protections.
Do you think businesses are fully aware of the implications of these AI exclusions when they’re included in their policies?
Honestly, I don’t think many are. The language in these exclusions can be dense and ambiguous, and unless a broker or risk manager explicitly flags the issue, businesses might not realize the extent of what’s being excluded. There’s a real need for better education and transparency in the industry to ensure clients understand how these clauses could affect their coverage.
Why do you believe AI risks are unlikely to develop into a standalone line of insurance, unlike cyber coverage?
AI risks are so intertwined with existing exposures—think privacy, regulatory issues, or even physical damage from system failures—that creating a standalone product feels redundant. The industry already has frameworks in place through cyber and other lines that can be adapted. It’s more about modernizing those frameworks than carving out a completely separate category.
How do you envision AI risks being woven into existing insurance products over the coming years?
I see AI risks being gradually integrated into current lines through endorsements and updated policy language. Cyber policies, for instance, will likely continue to evolve to explicitly cover AI-driven threats, while D&O and E&O might see more tailored exclusions or riders. The goal will be to balance coverage with clarity, ensuring that insurers can manage their exposure while still meeting client needs.
Are there specific industries or roles that might require unique coverage solutions for AI-related risks?
Absolutely. AI developers and tech companies at the forefront of creating these tools have unique exposures. They face potential liability for errors in their algorithms or misuse of their products, which could warrant specialized E&O coverage. Industries heavily reliant on AI for decision-making, like healthcare or finance, might also need customized solutions to address sector-specific risks.
What are the biggest challenges in designing insurance products that blend AI risks with other exposures?
One major challenge is defining the scope of AI risks—where do they start and stop? There’s also the issue of quantifying these risks, as AI’s impact can be unpredictable and far-reaching. Insurers need to balance offering comprehensive coverage with managing their own exposure, which requires clear data and underwriting guidelines. Plus, regulatory uncertainty around AI adds another layer of complexity.
Can you share insights on the limited AI-specific insurance products currently available in the market?
There are very few AI-specific products out there, likely no more than a handful. These are niche offerings designed to cover certain legal or financial harms tied to AI use, like liability for algorithmic bias or data misuse. They’re a starting point, but they’re far from comprehensive, often leaving significant gaps that businesses need to address through other means.
What major gaps in coverage do these AI-specific products fail to address?
Most of these products don’t touch on critical areas like bodily injury or property damage that could result from AI failures—think autonomous systems malfunctioning in a factory or a self-driving car causing an accident. These are real risks that fall outside the scope of current offerings, meaning businesses still face substantial exposure in operational contexts.
How are AI exclusions influencing businesses to rethink their approach to risk management?
AI exclusions are sparking important conversations about risk management. Businesses are being forced to assess how they use AI, from implementing acceptable use policies to restricting access and training employees on safe practices. It’s a reminder that technology, while powerful, needs guardrails to prevent misuse or unintended consequences that could lead to uncovered losses.
What is your forecast for the evolution of AI-specific insurance solutions in the next few years?
I expect we’ll see a gradual expansion of AI-specific solutions, but not a standalone market. Insurers will likely refine endorsements within existing lines to address AI risks more explicitly, while niche products for high-exposure industries like tech development will grow. The bigger shift will be in underwriting—expect more focus on a company’s processes and controls around AI use as a condition of coverage. We’re in for an interesting period of adaptation as the industry catches up with the technology.