The Hidden Costs of AI Failures
Someone always pays
A few weeks ago, Amazon’s AI coding assistant deleted a production AWS environment. The resulting outage lasted 13 hours, a catastrophic failure. An engineer had given the bot broader permissions than intended (who among us hasn’t handed AI something we shouldn’t) and presto, mucho money disappeared. What happened next is indicative of where all this is going. Amazon blamed the system admin. The person who configured the deployment, often an engineer several layers removed from the business decision, became the de facto fall guy.
Which means it is worth asking: who is liable when an AI model does something bad? Who pays that cost?
Everyone has internalized the “AI is getting cheaper” narrative and is building strategy around it. But here’s the mistake: AI is not getting cheaper. AI tokens are getting cheaper, and that is something materially different. The actual cost of deploying AI includes a long list of line items that don’t show up in pitch decks: insurance (if you can even get it), monitoring, certification, human oversight, error correction, legal exposure, and the reputational cost of being the person who championed the thing that blew up.
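To see the shape of that ledger, here’s a back-of-envelope sketch in Python. Every number below is hypothetical; the point is which line items exist, not what they’ll cost you specifically:

```python
# Back-of-envelope total cost of an AI deployment (all figures hypothetical).
# Token spend is the sticker price; everything below it is the part
# that rarely shows up in pitch decks.

annual_token_spend = 120_000           # API bills for the model itself

hidden_line_items = {
    "monitoring_and_evals": 60_000,     # logging, eval suites, regression checks
    "human_oversight": 150_000,         # reviewers approving high-stakes outputs
    "error_correction": 45_000,         # remediation when outputs go wrong
    "insurance_and_legal": 80_000,      # coverage (if obtainable) plus counsel
    "certification_compliance": 40_000, # audits and sector-specific reviews
}

non_determinism_tax = sum(hidden_line_items.values())
total_cost = annual_token_spend + non_determinism_tax

print(f"Sticker price (tokens):    ${annual_token_spend:,}")
print(f"Non-determinism tax:       ${non_determinism_tax:,}")
print(f"Tax as multiple of tokens: {non_determinism_tax / annual_token_spend:.1f}x")
```

With these made-up numbers, the hidden items total roughly three times the token spend. The specific ratio will vary; the existence of the ratio will not.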
For 70 years, software was deterministic. Same input, same output. Everything from subscription pricing to our go-to-market motion depended on that fact. AI is probabilistic. And the cost of that change (call it the non-determinism tax) is real and almost completely unpriced.
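You can see the difference in miniature below. The `sample_model` function is a hypothetical stand-in for a sampled LLM call, not any real API:

```python
import random

def classic_software(x: int) -> int:
    # Deterministic: same input, same output, every single time.
    return x * 2

def sample_model(prompt: str) -> str:
    # Stand-in for a sampled LLM call: same input, a distribution of outputs.
    # A real deployment would be an API call with temperature above zero.
    return random.choice(["approve", "deny", "escalate"])

assert classic_software(21) == classic_software(21)       # holds forever
print({sample_model("same prompt") for _ in range(10)})   # usually more than one outcome
```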
The data is piling up fast:
According to EY Global’s Responsible AI survey, 64% of companies with annual revenue over $1 billion have experienced AI-related losses exceeding $1 million, with losses averaging $4.4 million per company.
47% of CISOs have observed AI agents exhibiting unintended or unauthorized behavior.
Dozens of AI-related lawsuits are currently pending in the US across copyright, employment discrimination, and liability, with the volume growing 137% year over year.
If you’re building an AI company, investing in one, or just using AI tools at work, you’re paying the non-determinism tax whether you realize it or not.
The Blame Chain
Part of why this cost stays hidden is that nobody in the AI stack wants to own it. And the failures are showing up at every layer.
Internally, a rogue AI agent at Meta exposed proprietary code and user-related data for two hours in March, a “Sev 1” incident, the second-highest severity level in Meta’s system. Meta’s response was to tighten IAM controls. Translation: the agent that went rogue isn’t liable; the identity governance team that didn’t anticipate the failure mode is. With customer-facing issues it’s the same pattern. An IBM case in March saw an autonomous customer-service agent approving refunds outside policy guidelines. IBM didn’t blame the model. It blamed the deployment configuration. Meanwhile, researchers are tracking more and more AI-written code in the wild that is vulnerable to attackers: vulnerabilities nobody signed off on, because nobody reviewed the code the model wrote.
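Both fixes boil down to the same pattern: put a deterministic policy gate between what the agent decides and what actually executes. A minimal sketch of that gate, assuming a hypothetical refund workflow (the `RefundRequest` shape and the $200 limit are invented for illustration):

```python
from dataclasses import dataclass

MAX_AUTO_REFUND = 200.00  # hypothetical policy ceiling for autonomous approval

@dataclass
class RefundRequest:
    order_id: str
    amount: float
    agent_rationale: str

def execute_refund(req: RefundRequest) -> str:
    # Deterministic gate: the agent proposes, policy code disposes.
    if req.amount <= MAX_AUTO_REFUND:
        return f"approved: {req.order_id} for ${req.amount:.2f}"
    # Outside policy: never auto-execute, route to a human with context.
    return f"escalated: {req.order_id} (${req.amount:.2f} exceeds auto-approval limit)"

print(execute_refund(RefundRequest("A-1001", 49.99, "item arrived damaged")))
print(execute_refund(RefundRequest("A-1002", 1250.00, "goodwill credit requested")))
```

The gate itself is classic deterministic software: testable, auditable, and something you can actually point to when the blame chain starts.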
Each of these failures demonstrates the same structural problem. When an AI agent makes a bad autonomous decision, the foundation model provider (OpenAI, Anthropic, Google) disclaims liability. Model provided as-is, no warranties, not responsible for outputs. The application layer has lawyered its way out too. TermScout data published through Stanford Law’s CodeX program found that 88% of AI vendors cap liability at the monthly subscription fee. Only 17% provide regulatory compliance warranties.
Which leaves the enterprise holding the bag. And courts are moving in this direction fast. The Air Canada ruling established that companies are responsible for what their AI agents say and do. The Mobley v. Workday case is testing the limits: Workday’s screening tool allegedly discriminated against applicants over 40, rejecting the lead plaintiff from more than 100 jobs, sometimes within minutes of applying. In May 2025, a federal court granted preliminary collective certification, allowing the case to proceed as a nationwide collective action.
Every link in the chain has a plausible defense. None of them owns the risk. It just pools, unpriced, on the books of whoever happens to be deploying the thing.
The average AI deployment carries an implicit cost that’s several multiples of the sticker price of the model itself. Below: how to calculate your non-determinism tax, why regulation won’t fix this, and what the companies building the AI risk-pricing layer are betting on.