Chatbots Ignite New Wave of Cyber Privacy Lawsuits

I’m thrilled to sit down with Simon Glairy, a distinguished expert in insurance and Insurtech, whose deep knowledge of risk management and AI-driven risk assessment makes him the perfect person to guide us through the complex landscape of cyber liability. Today, we’re diving into the emerging challenges posed by chatbots, a technology once hailed for boosting customer engagement but now at the forefront of privacy litigation and regulatory scrutiny. Our conversation explores the unique risks chatbots introduce, the legal frameworks scrutinizing their use, the critical role of consent in data collection, industry-specific vulnerabilities, and the murky waters of AI training practices. Let’s unpack how these innovations are reshaping the cyber risk battlefield.

Can you walk us through what makes chatbots such a significant emerging risk in the world of cyber liability?

Absolutely. Chatbots are a unique risk because they interact with users in real time, capturing data as conversations unfold. Unlike static tools, they often collect highly personal or sensitive information without users fully understanding what’s happening behind the scenes. This immediacy and lack of transparency create a perfect storm for privacy violations. Additionally, many companies integrate third-party chatbot tools without fully vetting how data is handled, stored, or shared, which amplifies the risk of litigation. Insurers are seeing these issues crop up as a top claim driver, right alongside ransomware, because the potential for class-action lawsuits is massive when data collection practices go awry.

How do chatbots stand out from other technologies like pixel tracking or behavioral analytics when it comes to the risks they pose?

Chatbots differ because they’re inherently conversational and interactive, which builds a false sense of trust. Users often treat them like a human assistant, sharing personal details they wouldn’t input into a form or knowingly allow to be tracked via pixels. While pixel tracking or analytics quietly monitor behavior, chatbots actively solicit input—sometimes sensitive data like health or financial information—making the privacy stakes higher. The legal theories targeting chatbots mirror those used against tracking tools, but the directness of data capture in chatbots often leads to stronger claims of unauthorized collection or lack of consent.

What privacy laws are currently putting chatbots under the microscope, and why do they matter so much?

Chatbots are being heavily scrutinized under several key frameworks. In the EU, the GDPR sets stringent rules on data collection, requiring explicit consent and transparency about how data is used—something many chatbot deployments fail to address. In the U.S., the California Consumer Privacy Act, or CCPA, gives users rights over their personal data, including knowing what’s collected and opting out, which poses challenges for chatbots that log conversations by default. Then there’s Illinois’ Biometric Information Privacy Act, or BIPA, which comes into play if chatbots process biometric data like voice recordings without proper disclosure. These laws matter because they carry hefty penalties—BIPA, for instance, allows damages per violation, which can add up fast in class actions.

Why is the issue of consent so central to the legal risks surrounding chatbot data collection?

Consent is the linchpin because it determines whether data collection is lawful. Without clear, informed consent, companies are essentially operating in a legal gray area, inviting lawsuits and fines. The problem is, many chatbot interfaces don’t explicitly ask for permission before saving chats or sharing data with third parties. Insurers push for opt-in consent—where users actively agree—over opt-out, because it’s a stronger defense against claims of wrongful collection. If consent isn’t obtained or isn’t specific about data usage, companies can face statutory penalties, especially under strict laws like GDPR or BIPA.

How are chatbots in industries like healthcare and retail becoming focal points for litigation?

In healthcare and retail, chatbots often handle incredibly sensitive data. Think about a patient discussing symptoms or a shopper entering payment details—these interactions involve personal health information or financial records that users assume are private. Litigation arises because users frequently don’t realize their data might be stored, analyzed, or shared with third parties for purposes beyond the immediate conversation. The lack of clarity in disclosures fuels lawsuits, as plaintiffs argue they were misled about how their information was handled, drawing parallels to earlier privacy cases over data collection at checkout counters.

What liability risks do website owners face when they use third-party AI tools or chatbots on their platforms?

Website owners can be on the hook even if they’re not directly collecting data, because they’re ultimately responsible for what happens on their platform. If a third-party chatbot captures user data without proper disclosure, the website owner could be accused of facilitating unauthorized interception under wiretapping statutes. This mirrors litigation trends in online tracking, where complicity in data collection practices leads to lawsuits. Clear disclosures to visitors about the use of AI tools are critical—without them, owners risk being seen as negligent or complicit in privacy violations.

Can you shed light on the emerging legal risks tied to using chatbot data for training AI models?

Using chatbot data for AI training is a minefield right now. When customer chats or proprietary information is fed into training models without consent, it opens up dual risks: privacy violations and intellectual property disputes. If copyrighted material or pirated content is used without permission, companies can face massive claims—there was a recent $1.5 billion settlement highlighting this exact issue. The core problem is whether the data was obtained legally and whether users were informed about its downstream use. Without transparency and proper channels, organizations could be hit with both privacy lawsuits and copyright infringement claims.

What’s your forecast for the future of chatbot-related cyber liability risks and litigation?

I expect chatbot-related risks to escalate over the next few years as AI becomes even more embedded in customer interactions. We’re likely to see a wave of class-action lawsuits, especially as regulators and courts clarify how existing privacy laws apply to these technologies. Underwriters will tighten their standards, demanding detailed policies on data handling and vendor oversight. On the flip side, I think we’ll also see innovation in consent mechanisms and transparency tools as companies try to stay ahead of litigation. Ultimately, the balance between leveraging AI for efficiency and protecting user privacy will define the next battleground in cyber liability.
