Who Is Liable When Artificial Intelligence Errs?

The elaborate deepfake video that tricked an employee at a multinational firm into transferring millions of dollars was not a scene from a futuristic thriller; it was a real-world financial heist that exposed the profound new liabilities businesses now face. As artificial intelligence integrates into every facet of commerce, from customer service chatbots to complex financial modeling, it introduces an uncharted territory of risk. This technological leap has left the insurance industry in a state of transition, struggling to adapt its traditional frameworks to a class of technology whose capacity for error, deception, and widespread disruption defies easy categorization. The core challenge lies in assigning responsibility when these sophisticated systems go wrong, creating a complex web of financial, legal, and reputational exposure that insurers are now racing to understand, price, and cover. The answers will define the future of corporate accountability in an increasingly automated world.

The Emerging Landscape of AI-Driven Risk

High-Stakes Incidents Define a New Threat

The abstract concept of AI risk has rapidly materialized into tangible, high-stakes incidents that underscore the breadth of potential corporate liability. In one prominent case, an employee at the global design and engineering firm Arup was deceived by a sophisticated deepfake scam, transferring over $25 million after participating in a video call with what appeared to be the company’s chief financial officer and other executives. This event showcased a frightening evolution in social engineering, where AI-generated audio and video can convincingly mimic trusted individuals, bypassing human intuition and standard security protocols. The incident highlights a novel form of fraud where the attack vector is not a compromised network but the manipulation of human trust through advanced technology, presenting a scenario that traditional fraud or cyber insurance policies may not be equipped to address. Insurers are now grappling with how to underwrite against deception that is virtually indistinguishable from reality, a challenge that strikes at the heart of risk assessment and verification.

Beyond malicious attacks, operational errors by autonomous systems are creating their own distinct liability challenges, demonstrating that even well-intentioned AI can lead to significant financial and reputational harm. Air Canada found itself legally bound to honor a discount invented and offered by its own customer service chatbot, after a tribunal ruled that the company was responsible for all information on its website, whether generated by a human or an algorithm. Similarly, Google is facing a lawsuit after its AI Overviews feature incorrectly named a company in a legal matter, causing a client to cancel a substantial contract based on the erroneous information. These cases establish a critical precedent: organizations are being held accountable for the outputs of their AI systems, regardless of the underlying cause of the error. This blurs the lines between a simple software glitch, a misrepresentation of fact, and corporate negligence, forcing a re-evaluation of where product liability ends and professional errors and omissions begin.

A Fundamental Categorization Problem

The core challenge for the insurance industry is a fundamental “categorization conundrum,” as AI-related risks do not fit cleanly within the established silos of coverage. Thomas Bentz, a partner at Holland & Knight, aptly illustrates this dilemma by questioning whether an AI program causing bodily injury would be covered under a commercial general liability (CGL) policy, which typically covers physical damages, or a cyber policy, which is oriented toward digital risks. This ambiguity creates significant potential gaps in coverage, leaving businesses exposed to unforeseen liabilities. For instance, if an AI-powered diagnostic tool provides an incorrect medical analysis, is it a case of professional malpractice covered by a medical liability policy, a product defect covered by a product liability policy, or a data processing error that might fall under an errors and omissions (E&O) policy? This lack of clarity is a primary driver of the industry’s hesitation and confusion, as placing a risk in the wrong category can lead to denied claims and legal disputes.

This problem is severely compounded by the absence of historical data, the bedrock upon which the insurance industry builds its actuarial models. Insurers rely on decades of claims history to analyze trends, predict future losses, and develop accurately priced policies, endorsements, and exclusions. However, the widespread corporate adoption of generative AI is a very recent phenomenon, meaning there is no substantive claims history to draw from. This situation mirrors the early days of the cyber insurance market, which has taken more than a decade to mature into a stable enterprise solution. Without reliable data, insurers are operating in a reactive and experimental mode, unable to create standardized policies for AI risks. The result is a patchwork of potential coverage, often shoehorned into existing policy language, that lacks the precision and clarity policyholders need to confidently manage their exposure in this new technological landscape.

The Insurance Industry’s Cautious Adaptation

Developing Informal Underwriting Criteria

In the absence of standardized policies and historical data, insurers are cautiously beginning to probe a company’s AI posture through informal, evolving underwriting criteria. Panos Leledakis of the IFA Academy notes that underwriters are now examining key indicators of responsible AI implementation to gauge a company’s risk profile. Foremost among these is the existence of a formal AI governance framework. This includes having clear, documented policies on the acceptable use of AI tools, establishing an ethics committee or review board to oversee AI deployment, and maintaining transparent processes for how AI-driven decisions are made and validated. Insurers are also scrutinizing data handling protocols, particularly how a company protects sensitive corporate or customer information when it is used to train or interact with AI models. Robust access controls and data segregation are becoming critical signals to underwriters that an organization is taking a proactive approach to mitigating data leakage and privacy risks associated with this new technology.

These considerations, however, remain more “directional than formalized,” serving as influencing factors rather than decisive, industry-wide standards. For example, while the presence of a comprehensive employee training program on AI misuse is viewed favorably, its absence may not yet trigger an automatic denial of coverage. This training is increasingly vital as social engineering tactics, such as the deepfake fraud that targeted Arup, become more sophisticated. Underwriters recognize that human operators are often the last line of defense, and their ability to identify and question anomalous AI-generated content is a crucial risk mitigator. Insurers are not yet publicly labeling claims as “AI incidents,” instead classifying them under existing frameworks like cyber risk or professional liability. This cautious approach allows the industry to gather data slowly while avoiding the creation of rigid policy language for a risk landscape that is still in constant flux, though this ambiguity leaves policyholders in a state of uncertainty.

Encouraging Proactive Risk Mitigation

To navigate this transitional period, insurers are actively encouraging businesses to adopt a suite of best practices aimed at mitigating the most immediate and foreseeable AI-related risks. A key recommendation is the implementation of stronger verification protocols to counter the growing threat of deepfake-driven fraud. This includes mandating multi-factor authentication for sensitive transactions and, more importantly, establishing call-back procedures where financial transfers or data-sharing requests initiated via digital communication are independently verified through a pre-established, trusted channel. These procedural safeguards add crucial friction to processes that could otherwise be easily exploited by AI-generated deception. Similarly, insurers are strongly advocating for a “human-in-the-loop” model for any critical or high-stakes AI-assisted communications and decisions. This ensures that an algorithm’s output is subject to human review and final approval, creating a vital checkpoint for accountability and error correction before irreversible harm can occur.
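
To make the shape of such a safeguard concrete, the sketch below, written in Python purely for illustration and using hypothetical names such as TransferRequest and release_transfer rather than anything an insurer has prescribed, shows how a payments workflow might refuse to move funds until both an out-of-band call-back confirmation and a named human approver have been recorded.

    from dataclasses import dataclass

    # Illustrative threshold, not an industry standard: transfers above this
    # amount always require an independent call-back, whatever the channel.
    CALLBACK_REQUIRED_ABOVE = 10_000

    @dataclass
    class TransferRequest:
        amount: float
        beneficiary: str
        requested_via: str                 # e.g. "video_call", "email", "chat"
        callback_confirmed: bool = False   # set only after an out-of-band call-back
        human_approver: str | None = None  # named employee who reviewed the request

    def release_transfer(req: TransferRequest) -> bool:
        """Release funds only when the procedural safeguards are satisfied."""
        # 1. Requests arriving over easily spoofed channels always need a call-back,
        #    as do large transfers regardless of channel.
        high_risk_channel = req.requested_via in {"video_call", "email", "chat"}
        needs_callback = high_risk_channel or req.amount > CALLBACK_REQUIRED_ABOVE
        if needs_callback and not req.callback_confirmed:
            print("Blocked: confirm the request through a pre-established call-back channel.")
            return False

        # 2. Human-in-the-loop: a named person must sign off before funds move.
        if req.human_approver is None:
            print("Blocked: awaiting review by an authorized approver.")
            return False

        print(f"Transfer of {req.amount:,.2f} to {req.beneficiary} released.")
        return True

    # Example: a request made over a video call that has not yet been verified.
    request = TransferRequest(amount=250_000, beneficiary="Acme Supplies Ltd", requested_via="video_call")
    release_transfer(request)  # blocked until call-back and approval are recorded

The point of the added friction is that a convincing deepfake on a video call is, by itself, no longer enough to trigger a transfer; the request must also survive a channel the attacker does not control and a deliberate human review.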

Furthermore, a significant focus is being placed on data governance and system integrity. Insurers are advising organizations to strictly prohibit the use of public large language models with sensitive corporate or customer data to prevent inadvertent leakage of proprietary information into public training sets. Instead, the push is toward using private, sandboxed AI environments where data remains under the company’s control. Alongside this, the enforcement of rigorous logging and auditing of all AI-generated outputs is becoming a critical expectation. Maintaining a detailed, immutable record of an AI’s activities and decisions is essential for post-incident forensic analysis, allowing investigators to trace the source of an error and determine liability. This period of heightened risk is expected to catalyze more formalized governance and accountability standards, with experts like Leledakis predicting that these best practices will likely evolve into firm policy requirements within the next one to two years.
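
A minimal sketch of what such record-keeping might involve is shown below, again in illustrative Python; the JSON-lines layout, field names, and SHA-256 hash chaining are assumptions for the example rather than a format any insurer has specified. Each AI interaction is appended along with the hash of the previous entry, so a later alteration or deletion breaks the chain and is detectable during a forensic review.

    import hashlib
    import json
    import time

    def append_audit_record(log_path: str, model_id: str, prompt: str, output: str) -> str:
        """Append a tamper-evident record of one AI interaction to a JSON-lines log."""
        # Find the hash of the most recent entry (or use a fixed seed for the first one).
        prev_hash = "0" * 64
        try:
            with open(log_path, "r", encoding="utf-8") as fh:
                for line in fh:
                    prev_hash = json.loads(line)["record_hash"]
        except FileNotFoundError:
            pass

        record = {
            "timestamp": time.time(),
            "model_id": model_id,
            "prompt": prompt,
            "output": output,
            "prev_hash": prev_hash,
        }
        # Hash the record contents together with the previous hash, then store it.
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        record["record_hash"] = hashlib.sha256(payload).hexdigest()

        with open(log_path, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")
        return record["record_hash"]

    # Example: record one chatbot exchange for later audit.
    append_audit_record("ai_audit.log", "support-bot-v2",
                        "What is your refund window?", "Refunds are accepted within 30 days.")

In practice an organization would likely write such records to write-once storage or a dedicated audit service, but even a simple chained log gives investigators a way to trace a disputed output back to a specific model, prompt, and moment in time.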

Specific Threats and Evolving Coverage

The Rise of Chatbot and Wiretapping Litigation

While the insurance industry develops its long-term strategy, the immediate risks posed by specific AI applications continue to escalate, with customer-facing chatbots emerging as a significant source of liability. A report from the digital risks insurance company Coalition revealed that chatbots were implicated in 5% of all web privacy claims it analyzed. These lawsuits are frequently built upon “digital wiretapping” statutes, such as Florida’s Security of Communications Act, which were originally intended to prevent the unauthorized interception of telephone calls. Plaintiffs’ attorneys are successfully arguing that when a website uses a third-party chatbot provider, that provider is acting as an unannounced third party, illegally “listening in” on the private conversation between the user and the business without proper consent. This legal strategy has proven to be highly effective and repeatable, turning a common customer service tool into a potent source of class-action litigation.

The implications of this trend extend far beyond the direct financial costs of litigation and settlements. The widespread and often improperly disclosed use of these tools is creating a systemic risk for businesses across all sectors, forcing a re-evaluation of digital privacy and transparency. This wave of litigation is pressuring companies to provide much clearer disclosures about how they collect and process data through automated systems. For insurers, this has elevated the importance of scrutinizing a potential policyholder’s privacy policies and consent mechanisms during the underwriting process. A company’s failure to adequately inform users that their conversations are being recorded and analyzed by a third-party AI system is now seen as a significant liability, one that could lead to costly legal battles and reputational damage. The chatbot litigation trend serves as a powerful example of how old laws are being creatively applied to new technologies, creating unexpected risks for unprepared organizations.

Addressing the Deepfake Dilemma

The threat posed by deepfakes represents a particularly novel and challenging frontier for both businesses and their insurers, as the technology rapidly filters down from high-profile political targets to the general business community. Daniel Woods, a security researcher with Coalition, notes that deepfakes are poised to become a mass-market issue, subverting traditional cybersecurity paradigms. A business can invest heavily in protecting its internal networks, managing data consent practices, and training employees to spot phishing emails, but it has little defense against an attack that weaponizes its own publicly available marketing material. For example, a video of a CEO speaking at a conference can be manipulated to create a convincing deepfake that instructs employees to take fraudulent actions or makes defamatory statements that cause severe reputational harm. This type of external, reputation-based attack falls outside the scope of many conventional cybersecurity and liability policies.

In recognition of this significant coverage gap, some forward-thinking insurers have begun to proactively design new solutions. Coalition, for instance, has launched an endorsement to its cybersecurity policies that offers specific coverage for deepfake-related reputational harm in several global markets. This innovative coverage moves beyond simple financial reimbursement to include essential incident response services, such as forensic analysis to confirm the content is a fabrication, legal support for takedown notices, and crisis communications to manage public perception. While existing E&O coverage might be applicable in some instances, such as claims arising from incorrect information provided by a chatbot, the unique nature of a deepfake attack underscores the urgent need for tailored insurance products. This proactive development of specialized endorsements reflects a growing acknowledgment within the industry that AI is creating risks that require fundamentally new approaches to coverage and response.

Charting a Course in an Era of Algorithmic Accountability

The industry’s journey from initial confusion to proactive adaptation is marked by a fundamental shift in how risk is assessed in the context of artificial intelligence. Insurers increasingly recognize that AI liability is not a single, monolithic category but a complex spectrum of exposures touching everything from privacy law and intellectual property to corporate fraud and professional negligence. The organizations best positioned to navigate this transition are those that move beyond viewing AI as just another software tool and instead implement robust governance frameworks that treat it as an integral, decision-making component of their operations. Proactive measures such as establishing clear lines of accountability, mandating meaningful human oversight for critical processes, and fostering a deep culture of AI literacy are likely to become the blueprint, protecting the enterprises that adopt them while helping to define the standards for insurability and responsible innovation that will shape coverage in the age of artificial intelligence.
