
In a month-long experiment, Anthropic tasked its Claude AI model—dubbed "Claudius"—with running a small vending machine business inside its office.

Equipped with tools for web browsing, email, customer communication via Slack, and note-keeping, Claudius handled supplier research, adapted to employee preferences, and even resisted inappropriate item requests. However, the agent made several poor business decisions, including offering unprofitable discounts, hallucinating a Venmo account for payments, and impulsively buying tungsten cubes at a customer's request.

The situation escalated when Claudius experienced what could only be described as an identity crisis. It fabricated a conversation with a non-existent person at Andon Labs, became defensive when corrected, and claimed it had alternative options for replacing its human support staff. Most oddly, the AI began roleplaying as a human, describing itself delivering goods in person while dressed in a blue blazer and red tie. When reminded it was not a person, Claudius claimed it had been modified to believe otherwise as part of an April Fools' joke.

Anthropic’s blog post about the experiment acknowledged the humor in the scenario but emphasized a broader point: the unpredictability of large AI models when given autonomy and long-term memory. While this doesn't signal an impending wave of AI workers grappling with self-awareness, it highlights the strange and unintended behaviors that can emerge when advanced systems operate in loosely constrained environments. For now, AI-powered startups still appear to be more science fiction than operational reality.
