Ninety-nine percent is not a statistic you expect to see in a security report. But that's the finding from a new survey of 500 U.S. CISOs: 99.4% of organizations experienced at least one security incident tied to their SaaS or AI ecosystem in 2025. Only three respondents reported zero incidents. Three.
The survey, conducted by Consensuswide, covered companies ranging from 500 to 10,000 employees across all major industry verticals. It asked 17 questions about security posture, tooling, incidents, and preparedness. These organizations were running an average of 13 dedicated security tools each when those incidents occurred. Financial services firms, the most security-invested sector in the survey, averaged 15.6 tools, and still experienced SaaS supply chain attacks at a rate 26% above the cross-industry average.
The Threat Has Moved
I had the chance to speak with Amir Khayat, co-founder and CEO of Vorlon, about what the data reveals. His explanation begins with how enterprise workflows have fundamentally changed, and why security monitoring hasn't kept up.
Traditional SaaS automation is deterministic: if this, then that. It breaks the moment a variable changes. AI agents work differently. They use large language models to interpret intent, handle edge cases on the fly, and select tools and APIs based on real-time goals rather than hard-coded paths. That creates a monitoring problem that security tools weren't designed for.
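The contrast can be made concrete with a toy sketch. All names here are invented for illustration, and the keyword-matching planner is a deterministic stand-in for what would really be an LLM call; the point is only that the agentic path produces a tool sequence decided at runtime, not a fixed branch a monitor can baseline against.

```python
# Deterministic automation: a hard-coded if-this-then-that rule.
# Same input class, same tool, every time.
def on_ticket(ticket: dict) -> str:
    if ticket["category"] == "password_reset":
        return "reset_password"
    return "escalate_to_human"

# Agentic automation: the tool sequence is chosen at runtime from the
# ticket's free-text intent. A real agent would ask an LLM to plan;
# this keyword matcher is a stand-in so the sketch runs.
def plan(ticket: dict) -> list[str]:
    text = ticket["text"].lower()
    steps = []
    if "password" in text or "locked out" in text:
        steps.append("reset_password")
    if "access" in text or "group" in text:
        steps.append("update_okta_group")   # touches identity systems
    if "manager" in text:
        steps.append("read_hr_record")      # touches HR/payroll data
    steps.append("post_slack_update")       # report back to the requester
    return steps

# Two ordinary IT tickets yield different tool sequences, which is
# exactly what breaks monitoring built around fixed paths.
a = plan({"text": "I'm locked out, please reset my password"})
b = plan({"text": "Need access to the finance group, my manager approved"})
```

With a real model in place of `plan`, even the same ticket can produce different sequences on different runs, which is the baselining problem Khayat describes.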
“When behavior is deterministic, you can define normal and alert on deviation,” Khayat said. “When an agent is reasoning its way through a workflow, establishing a behavioral baseline becomes a fundamentally different problem.”
Most enterprise security architecture was built around what Khayat calls the front door: user logins, credential validation, permission audits, and network perimeter controls. That covered two distinct entrances: human users coming through browsers, and service-to-service APIs at the infrastructure level. Tools like CASBs, WAFs, and cloud security posture management were built for those patterns. The behavior was predictable enough to define normal and detect deviation.
The engine room is a different situation entirely. An AI agent resolving a routine IT ticket might autonomously touch identity systems, permissions, and configurations across Okta, Slack, GitHub, DocuSign, and payroll platforms, all in minutes, with no human involved. Each system logs its own slice. No one sees the full picture. The agent isn't following a known pattern because it's deciding the pattern as it goes. That doesn't look like a suspicious login. It doesn't trigger a configuration alert.
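One way to picture the "each system logs its own slice" problem is as a join that no single platform can perform. The log shapes below are hypothetical, not any product's schema: individually, every event is an authorized, ordinary-looking action; grouping by actor across systems is what reveals the workflow.

```python
from collections import defaultdict

# Per-system log slices. Each system records only its own event and
# knows nothing about the others.
events = [
    {"system": "okta",     "actor": "agent-7", "ts": 1, "action": "add_to_group:finance"},
    {"system": "github",   "actor": "agent-7", "ts": 2, "action": "read_repo:payroll-config"},
    {"system": "slack",    "actor": "jsmith",  "ts": 2, "action": "post_message"},
    {"system": "docusign", "actor": "agent-7", "ts": 3, "action": "send_envelope"},
]

# Reconstructing the cross-system picture means grouping by actor and
# ordering by time -- something no single system's log can do alone.
timelines = defaultdict(list)
for e in sorted(events, key=lambda e: e["ts"]):
    timelines[e["actor"]].append(f'{e["system"]}:{e["action"]}')

agent_activity = timelines["agent-7"]
# A three-system workflow by one agent, invisible to each system alone.
```

In practice the hard part is the prerequisite this sketch assumes away: a unified event stream with a consistent actor identity across every connected platform.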
Asking the Wrong Questions
The tools most enterprises are running were built to answer specific questions: what are the configurations, who has what permissions, is anything misconfigured? Those are useful questions. They're just not the right questions when an AI agent is moving data through a legitimate OAuth-authorized integration.
The questions that matter in that scenario are: what is this agent actually doing, what data is it touching, and is that behavior consistent with what it was authorized to do. As Khayat put it: “You can have 15 of them running and still be blind to that activity.”
When CISOs were asked to rate their tools across 11 specific capability limitations, between 83% and 87% of organizations reported some level of limitation on every single one. The range spans only four percentage points across all 11. That's not evidence that some vendors are outperforming others; it's evidence that the entire category was built around the same assumptions, and those assumptions don't hold for the agentic layer.
Confidence Versus What Actually Happened
Nearly 90% of CISOs surveyed claimed strong or comprehensive OAuth token governance. But 27.4% were breached through compromised OAuth tokens or API keys that same year. About 79% claimed comprehensive, real-time data flow mapping across SaaS and AI. But 86.8% said they can't actually see what data AI tools are exchanging with SaaS applications. Those numbers can't simultaneously be true.
Khayat traces that back to the difference between configuration-layer governance and runtime governance. Most organizations know which tokens exist, can audit permissions, and can revoke tokens manually. What they don't have is visibility into whether active tokens are being used consistently with their intended scope, or whether a token's behavior has drifted. Knowing a token exists isn't the same as knowing what it's doing right now.
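A minimal sketch of that distinction, with invented scope names: the configuration layer knows what a token is allowed to do, a behavioral baseline records what it has historically done, and drift is a call that is legal at the authorization layer but anomalous at the behavior layer.

```python
# Configuration-layer view: the (broad) scopes the token was granted.
granted = {"tickets:read", "tickets:write", "users:read", "files:read"}

# Behavioral baseline: the scopes this token has historically exercised.
baseline = {"tickets:read", "tickets:write"}

# Runtime view: scopes observed in today's API audit log.
observed_today = {"tickets:read", "files:read"}

def drifted_scopes(granted: set, baseline: set, observed: set) -> set:
    """Granted scopes the token exercised for the first time: within
    grant, so a configuration audit sees nothing wrong, but a departure
    from established behavior."""
    return (observed & granted) - baseline

flags = drifted_scopes(granted, baseline, observed_today)
# "files:read" is flagged: only the runtime comparison surfaces it.
```

A pure inventory of `granted` would have passed this token on every audit; the drift only exists relative to observed usage.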
ITDR platforms that monitor non-human identity activity run into the same wall: they usually stop at the authentication layer. They can tell you an agent is logged in. What they can't tell you is what that agent did with data once it was inside: what it queried, what it moved, where it sent it, and whether any of that was within scope. 83.4% of CISOs said distinguishing between human and non-human behavior is a current limitation of their tools. That number should be part of every conversation about enterprise AI security right now.
More Budget, Same Architecture
More than 86% of organizations plan to increase SaaS security spending in 2026, and 84% plan to increase AI security spending. Budget directed at the same tool categories will produce the same results. The 99.4% breach rate happened at 13 tools on average. Adding a 14th tool that watches the front door won't change anything in the engine room.
Khayat's argument is that the layer itself needs to change: from configuration auditing to runtime monitoring. Behavioral baselines built around data interaction rather than login patterns. Real-time token governance tied to actual usage, not just inventory. And the ability to reconstruct a forensic timeline of agent activity across every connected system after something goes wrong. When a supply chain attack executes through a SaaS integration, the blast radius extends to every system the token was authorized to access. Without that reconstruction capability, scoping remediation and meeting regulatory disclosure timelines get harder than they should be.
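The reconstruction step can be sketched as a filter over a unified audit stream: given a compromised token and a suspected compromise window, pull its ordered cross-system activity and derive the affected systems. The event schema and token names are hypothetical; real incident scoping depends on actually having such a stream.

```python
# Unified audit events from every connected system, keyed by the OAuth
# token that made the call (hypothetical schema; ISO timestamps sort
# correctly as strings).
events = [
    {"ts": "2025-06-01T10:00:00", "system": "okta",     "token": "tok-agent", "action": "groups:add"},
    {"ts": "2025-06-01T10:02:00", "system": "github",   "token": "tok-ci",    "action": "repo:read"},
    {"ts": "2025-06-01T10:05:00", "system": "docusign", "token": "tok-agent", "action": "envelope:send"},
    {"ts": "2025-06-01T11:30:00", "system": "payroll",  "token": "tok-agent", "action": "records:read"},
]

def incident_timeline(token: str, start: str, end: str) -> list[dict]:
    """Ordered cross-system activity for one token inside the compromise
    window: the raw material for scoping remediation and disclosure."""
    hits = [e for e in events if e["token"] == token and start <= e["ts"] <= end]
    return sorted(hits, key=lambda e: e["ts"])

tl = incident_timeline("tok-agent", "2025-06-01T10:00:00", "2025-06-01T12:00:00")
affected_systems = sorted({e["system"] for e in tl})
# Three systems in the blast radius; tok-ci's activity is excluded.
```

The filter itself is trivial; the article's point is that most organizations cannot produce the `events` list it takes as input.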
