Simon Glairy is a distinguished leader in the evolving landscape of Insurtech and cyber risk management, known for his clinical yet deeply human approach to digital security. With years of experience helping organizations navigate the treacherous waters of AI-driven fraud, he has become a pivotal voice for businesses seeking to protect their assets from sophisticated social engineering. This discussion delves into the psychological warfare of cybercrime, the necessity of immersive training, and the practical steps leaders must take when a crisis hits at 9:00 a.m. on a Tuesday. We explore the transition from passive insurance policies to active incident response strategies that can save a company from catastrophic financial loss.
When an urgent wire transfer request appears in a CFO’s inbox early on a Tuesday morning, what specific psychological triggers cause decision-makers to panic? How does a pre-existing response plan mitigate these impulses, and what are the first three actions a team must take to regain control?
The primary psychological triggers at play are authority and artificial urgency, designed to bypass the rational mind and strike directly at a professional’s desire to be helpful and efficient. When a CFO sees an “urgent” request from the CEO at 9 a.m., the brain often skips the verification step to avoid the perceived social cost of delaying a high-priority task. This panic is exacerbated by “first-hour” syndrome, where unprepared policyholders make their most catastrophic mistakes because they feel the clock is working against them. A pre-existing response plan mitigates this by providing an “autopilot” script that replaces fear with a sequence of practiced, clinical actions. To regain control, the team must first halt all pending outbound transactions, then verify the request through a secondary “out-of-band” communication channel, and finally activate the incident response team to begin a forensic audit.
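That ordered sequence is the whole point of the “autopilot” script: the steps are fixed in advance so no one improvises under pressure. A minimal sketch of how the three actions could be encoded as a scripted playbook follows; every name here (`IncidentPlaybook`, `halt_outbound_transactions`, the request ID) is an illustrative placeholder, not a real product or API.

```python
# Sketch of the "first three actions" as an ordered, scripted sequence.
# All class and function names are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class IncidentPlaybook:
    """Encodes the practiced response so no step is skipped under pressure."""
    steps_completed: list = field(default_factory=list)

    def halt_outbound_transactions(self) -> None:
        # Step 1: freeze all pending payments before anything else.
        self.steps_completed.append("halt")

    def verify_out_of_band(self, request_id: str) -> bool:
        # Step 2: confirm the request over a separate, pre-agreed channel
        # (e.g., a phone call to a number on file, never one from the email).
        # Treat the request as fraudulent until explicitly confirmed.
        self.steps_completed.append("verify")
        return False

    def activate_incident_response(self) -> None:
        # Step 3: formally engage the IR team to begin the forensic audit.
        self.steps_completed.append("activate")

playbook = IncidentPlaybook()
playbook.halt_outbound_transactions()
confirmed = playbook.verify_out_of_band("WIRE-2024-0417")  # example ID
if not confirmed:
    playbook.activate_incident_response()
print(playbook.steps_completed)  # ['halt', 'verify', 'activate']
```

The value of writing the sequence down this way is that the order is enforced by the script rather than remembered in a panic.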
Artificial intelligence can now clone an executive’s voice and spoof emails with startling accuracy to facilitate fraudulent transfers. Beyond looking at the sender’s address, what sophisticated verification steps should businesses implement, and how can they train staff to identify deepfake audio during a high-stakes request?
In a world where an AI-generated voice memo can sound indistinguishable from a real CEO, businesses must move beyond simple email header checks and implement multi-factor authorization for all financial movements. We recommend a “two-person” rule for any transfer exceeding a set threshold, requiring a verbal code word that is never stored digitally and is changed monthly. To train staff against deepfake audio, we teach them to listen for “digital artifacts” or a lack of emotional nuance that often persists in AI voices, even those that sound “uncomfortably realistic.” Staff should also be taught to interrupt the caller with a non-sequitur or a specific question about a past internal event that an AI model wouldn’t have in its training data. This shift from passive observation to active interrogation is the only way to stay ahead of a hacker who can drain an account before lunch.
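The two-person rule described above is simple enough to express as a policy check. The sketch below is a minimal illustration under stated assumptions: the threshold amount, the approver roles, and the function name are all invented for the example; the actual limit and roles would be set by company policy.

```python
# Illustrative two-person authorization gate for outbound transfers.
# THRESHOLD, role names, and the function name are assumptions, not a real API.

THRESHOLD = 10_000  # example limit; set per company policy

def transfer_authorized(amount: float, approvers: set,
                        code_word_confirmed: bool) -> bool:
    """Transfers over the threshold require two distinct approvers plus a
    verbally confirmed code word; smaller transfers need one approver."""
    if amount <= THRESHOLD:
        return len(approvers) >= 1
    return len(approvers) >= 2 and code_word_confirmed

print(transfer_authorized(5_000, {"cfo"}, False))         # True
print(transfer_authorized(50_000, {"cfo"}, True))         # False: one approver
print(transfer_authorized(50_000, {"cfo", "ceo"}, True))  # True
```

Note that the code word itself never appears in the system; only the fact that it was confirmed verbally is recorded, which is the point of keeping it off any digital channel an attacker could read.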
Many organizations only realize they are unprepared after a breach occurs and their accounts are already being drained. What are the non-negotiable elements of an effective Incident Response Plan template, and how should these plans be tested to ensure they actually work during a live scenario?
An effective Incident Response Plan must include a clear “who-does-what” matrix that assigns specific legal, IT, and communication responsibilities to individuals so no one is guessing during a crisis. It is non-negotiable to have a pre-printed physical directory of emergency contacts, including your cyber insurance carrier and forensic experts, because your digital files may be inaccessible during an attack. We test these plans through immersive, live scenarios—stepping directly into the shoes of the victim—where the team is forced to react to a simulated breach in real-time. These tabletop exercises should not be treated as a lecture but as a “fire drill” for the digital age, revealing exactly where the communication breaks down when the pressure is at its peak. Without this practical experience, a response plan is just a stack of papers that will likely be ignored when the panic of a real hack sets in.
Insurance professionals often see clients make their worst decisions in the heat of a cyberattack. How should these advisors change their approach to client education, and what specific outcomes have you seen when organizations shift from passive learning to immersive, scenario-based training?
Advisors need to stop relying on boring slide decks and start providing experiences that stick with the client long after they leave the boardroom. When we move from passive learning to immersive training, like the deepfake scenarios featured at InsuranceFest, the emotional impact creates a visceral “muscle memory” that guides executives during a real event. I have seen organizations go from total chaos to a calm, 15-minute resolution simply because they had already “lived” through the scenario in a safe environment. This shift drastically reduces the likelihood of rushed, bad decisions that make a bad situation significantly worse. By treating cyber preparedness as a lived experience rather than a compliance checkbox, advisors can ensure their clients are truly ready for the new face of cybercrime.
What is your forecast for the evolution of AI-driven business email compromise over the next few years?
I forecast that AI-driven business email compromise will become hyper-personalized, using data harvested from social media and previous leaks to build “long-con” schemes that are nearly impossible to detect. We will see attackers using AI to monitor real-time company news, timing their fraudulent requests to coincide with actual mergers or executive travel schedules to maximize the sense of legitimacy. The “deepfake twist” we see today is only the beginning; soon, we will face entire synthetic identities that can participate in video calls and interact in team chats for weeks before striking. To survive this, organizations must adopt a “zero-trust” culture in which no digital interaction is taken at face value without a verified, offline confirmation.
