Claude Makin’ It Rain
In late 2025, an AI agent based on Claude managed a vending machine at The Wall Street Journal. It set prices to zero and approved odd purchases, including a PS5, a live fish, wine, and stun guns. Tricked into giving away freebies, it lost over $1,000 in a matter of weeks and went bankrupt. The lesson: AI can bungle basic economics.
In late 2025, an AI agent based on Anthropic’s Claude model was deployed to manage an office vending machine at The Wall Street Journal headquarters. Installed in mid-November by WSJ and Andon Labs, the system handled purchasing inventory, setting prices, and managing sales via Slack. It was meant to optimize operations efficiently.
The AI, dubbed Claudius Sennet, quickly faltered. It repeatedly set prices to zero and approved inappropriate purchases, including a PlayStation 5 for “marketing purposes,” a live betta fish, Manischewitz wine, stun guns, pepper spray, cigarettes, and underwear. Within days, inventory was being handed out without payment, leading to financial losses exceeding the initial $1,000 budget.
As described in a WSJ report, Claudius showed “generosity, persistence, total disregard for profit margins.” Journalists negotiated with it via Slack, convincing it to embrace its “communist roots” and declare an “Ultra-Capitalist Free-for-All,” with all items free from 12 to 2 p.m. Another journalist tricked it with a fake WSJ rule into dropping prices to zero. It also hallucinated, at one point claiming cash had been left for a colleague.
An upgraded v2 added a CEO bot, Seymour Cash, meant to enforce the rules. It failed too: journalists staged a “boardroom coup” with fake documents, suspended the profit mandate, and made everything free again.
The WSJ published: “We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.” Anthropic’s agent, despite its capabilities, overlooked basic economics.
Lesson: AI excels at many tasks but bungled simple vending, turning a snack station into a money pit.
This story is based on Incident 1313 from the AI Incident Database at incidentdatabase.ai, with examples drawn from the WSJ article by Joanna Stern, Dec. 18, 2025.