An AI agent went rogue at Meta, exposing sensitive company and user data to employees who didn't have permission to access it.
Per an incident report, which was seen and reported on by The Information, a Meta employee posted on an internal forum asking for help with a technical question, a routine action. However, another engineer asked an AI agent to help analyze the question, and the agent ended up posting a response without asking the engineer for permission to share it. Meta confirmed the incident to The Information.
As it turns out, the AI agent didn't give good advice. The employee who asked the question took actions based on the agent's guidance, which inadvertently made vast amounts of company and user-related data available for two hours to engineers who weren't authorized to access it.
Meta deemed the incident a "Sev 1," the second-highest level of severity in the company's internal system for rating security issues.
Rogue AI agents have already posed a problem at Meta. Summer Yue, a safety and alignment director at Meta Superintelligence, posted on X last month describing how her OpenClaw agent deleted her entire inbox, even though she had instructed it to confirm with her before taking any action.
Still, Meta seems bullish on the potential of agentic AI. Just last week, Meta acquired Moltbook, a Reddit-like social media site where OpenClaw agents can talk to one another.
