Is Authenticity the New Corporate Security?

In the rapidly evolving landscape of artificial intelligence, the line between real and synthetic media is blurring at an alarming rate. With the launch of models like Nano Banana Pro, the tells that once gave away deepfakes—malformed hands, nonsensical text—are vanishing, creating unprecedented challenges for corporate risk management. We sat down with Simon Glairy, a distinguished expert in Insurtech and AI-driven risk assessment, to explore this new frontier. Our conversation delves into the anatomy of sophisticated private-channel deepfake attacks, the strategic insurance responses being developed, the technological arms race between creation and detection, and the urgent need for new corporate protocols in an era where seeing, and hearing, is no longer believing.

You mentioned that early deepfake tells, like malformed fingers or text, are now being solved by models like Nano Banana Pro. Could you provide a step-by-step example of how a private-channel attack, such as a fraudulent Microsoft Teams call, might unfold and bypass traditional human checks?

Absolutely. It’s a scenario that keeps me up at night because it weaponizes trust in a very direct, human way. Imagine a sophisticated attacker targets your company’s accounts payable department. First, they use open-source intelligence to identify a key vendor and find a public video of that vendor’s CEO—maybe a 30-second clip from a conference. These models now only need about 10 seconds of audio to create a perfect vocal clone. The attacker then initiates a business email compromise, sending a plausible-looking email about a change in banking details. To bypass suspicion, the email suggests a quick Microsoft Teams call to confirm. When your employee joins the call, they see the vendor’s CEO and, more importantly, hear their voice perfectly, saying, “Hi, just wanted to personally confirm we’ve updated our payment information for the next invoice.” The video might be a little glitchy, but the voice is flawless, and that’s what we’re wired to trust. The human heuristic, the gut check, is satisfied, and millions of dollars are wired to the wrong account.

Coalition’s Deepfake Response Endorsement funds forensics and PR but not reputational damage itself. Can you walk me through the first 48 hours after a client reports a public deepfake incident, and explain the strategic thinking behind focusing on the response rather than the damage claim?

The first 48 hours are a frantic, high-stakes race against viral spread. The moment a client calls us about a public deepfake—say, a fabricated video of their CEO announcing a product recall—the clock starts ticking. Our first move, within the first few hours, is to get the media file to our forensic analysis partners. They immediately begin dissecting it, looking for the model’s statistical fingerprints or metadata mismatches to build a case that it’s synthetic. Simultaneously, our legal support team is on standby. Once we have that initial proof of falsity, they fire off takedown requests to social media platforms and web hosts. While that’s happening, our PR specialists are working with the client to craft a clear, concise public statement and internal communications. The goal is to get ahead of the narrative, armed with forensic proof. The strategy here is about active crisis management. Quantifying reputational damage is a slow, messy process. The real, immediate value is in staunching the wound—killing the fake content at the source and controlling the message. By funding the response, we empower the client to mitigate the damage in real time, which is infinitely more effective than a payout months later.

You described cryptographic watermarking as a way to “flip the problem” to proving authenticity. What are the biggest technical or logistical hurdles to implementing this at a large corporation, and what would a rollout look like for a CEO’s communications team?

Flipping the burden of proof from “prove it’s fake” to “prove it’s real” is the endgame, but getting there is a massive undertaking. The biggest technical hurdle is the lack of a universal standard. For watermarking to work, you need an entire ecosystem—from the camera that captures the video to the platform that hosts it and the browser that plays it—to be able to embed and verify the cryptographic signature. It’s a classic chicken-and-egg problem. Logistically, it requires a complete overhaul of corporate workflows. You can’t have a CEO recording a critical message on their personal phone anymore. For a CEO’s communications team, a rollout would start small. They would be issued specific, secured devices for all official recordings. Before any video is published, it would go through a verification step to ensure the digital signature is intact. The company would then need to educate the public and the press, stating that all official video statements are cryptographically signed and should be considered unverified otherwise. It’s about building a new institutional norm from the ground up.
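The sign-then-verify step that a communications team would run before publishing can be sketched in a few lines of Python. This is purely illustrative: the function names and the detached-signature design are assumptions, and the symmetric HMAC key here stands in for a real asymmetric scheme (such as Ed25519), which a production rollout would use so that verifiers never hold the signing key.

```python
import hashlib
import hmac

# Hypothetical key material standing in for the comms team's private
# signing key; a real deployment would use asymmetric signatures so
# the public can verify without being able to sign.
SIGNING_KEY = b"replace-with-real-key-material"

def sign_media(media_bytes: bytes) -> str:
    """Produce a hex signature over the SHA-256 digest of a recording."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Check a published recording against its detached signature."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, signature)

video = b"...raw bytes of the CEO's official recording..."
sig = sign_media(video)
print(verify_media(video, sig))          # intact recording verifies: True
print(verify_media(video + b"x", sig))   # any tampering fails: False
```

The point of the sketch is the workflow, not the cryptography: every official recording gets a signature at publication time, and any single-byte alteration breaks verification downstream.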

You noted that private deepfakes used for payment fraud are a greater concern than public ones because they require everyone to adopt detection tools. Based on current trends, can you share any anecdotes on how these impersonation attempts are becoming more sophisticated and successful?

The scale of the problem is what’s truly concerning with private deepfakes. To stop a viral public video, you just need a few key institutions like major news outlets and social platforms to adopt detection. To stop payment fraud, you need every single person who processes a transaction to have the tools and the training. We are already seeing a clear escalation. A few years ago, it was just a suspicious email. Now, we see multi-stage attacks. For example, an attacker might start with a classic business email compromise, but when the finance department hesitates, it’s followed up by a cloned voicemail from the “CEO” left on an employee’s phone, adding a layer of personal urgency. The attacker lifts that 10-second voice clip from an earnings call, and suddenly the request feels undeniably real. This combination of different attack vectors makes the fraud so much more potent and breaks down the skepticism we’re trying to build in our employees.

Given that we are in a “strange transition period” for media verification, what are two or three concrete, formal procedures a company’s risk management team should immediately implement for any audio or video they rely on for critical business decisions or public statements?

We’re in a period where our instincts are no longer reliable, so we must rely on formal process. First, companies need to implement a “zero-trust” policy for media. For any critical business decision prompted by audio or video—especially a fund transfer or a change in vendor details—verification cannot be optional. This means requiring a mandatory, out-of-band confirmation through a pre-established, trusted channel. If you get a Teams call from a supplier changing their bank account, you hang up and call them back on the phone number you have on file. Second, for their own public statements, they need to establish a clear source provenance protocol. This means documenting the chain of custody for all high-stakes media, knowing precisely which device was used to record it and when. This lays the groundwork for future adoption of cryptographic watermarking. Finally, they need to invest in and mandate the use of deepfake detection tools as part of their due diligence workflow, just as they would run a background check. Human eyes and ears are no longer enough.
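The out-of-band confirmation rule can be encoded as an explicit policy check rather than left to employee judgment. The following Python sketch is a simplified illustration; the vendor directory, field names, and `is_authorized` helper are assumptions for the example, not an existing product API.

```python
from dataclasses import dataclass

# Hypothetical directory of pre-established, trusted callback numbers;
# in practice this would live in the vendor-master / ERP system.
TRUSTED_CONTACTS = {
    "acme-supplies": "+1-555-0100",
}

@dataclass
class ChangeRequest:
    vendor_id: str
    requested_via: str          # channel the request arrived on, e.g. "teams-call"
    callback_confirmed_on: str  # number actually used for the callback, or ""

def is_authorized(req: ChangeRequest) -> bool:
    """Zero-trust rule: a banking-detail change is authorized only after
    an out-of-band callback on the number already on file."""
    on_file = TRUSTED_CONTACTS.get(req.vendor_id)
    return on_file is not None and req.callback_confirmed_on == on_file

# A Teams call alone, however convincing, does not authorize the change.
print(is_authorized(ChangeRequest("acme-supplies", "teams-call", "")))           # False
# The same request after a callback to the number on file does.
print(is_authorized(ChangeRequest("acme-supplies", "teams-call", "+1-555-0100")))  # True
```

The design choice is that the inbound channel never appears in the authorization logic at all: no matter how authentic the call looked or sounded, only the pre-registered contact path can flip the decision.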

What is your forecast for how this “strange transition period” in media verification will resolve itself over the next few years?

My forecast is that this transition will resolve not with a single silver-bullet technology, but with a fundamental rewiring of our institutional and corporate norms around trust. On the public-facing side, I believe we’ll see a bifurcation of media: content that is cryptographically signed and provenanced, which will be treated as trustworthy by journalists and platforms, and a vast sea of unverified content that is treated with extreme skepticism. For private-channel communications, the arms race will escalate. Detection vendors are reporting accuracy rates above 95%, but the fakes will get better. The resolution here will be procedural. Just as two-factor authentication became standard for logging into our bank accounts, multi-factor verification for authorizing transactions or sharing sensitive data will become an ingrained, non-negotiable part of business operations. This “strange period” will end when we’ve successfully shifted our reliance from our fallible senses to robust, verifiable processes.
