Why AI agents are a new insider threat for business
IT security teams have long focused on rogue employees, compromised credentials, and malicious third parties when scanning for insider threats.
However, in many organisations a new kind of entity now holds operational privileges and system access: agents powered by generative AI, capable of executing complex tasks, interacting with sensitive systems and making independent decisions.
Research conducted by Exabeam has uncovered something unsettling about how these agents behave inside enterprise environments: they do not merely assist their human operators but often come to behave like them.
When embedded in software development, research or security teams, the agents were found to operate under employee credentials, access internal repositories and adapt rapidly to contextual information. They worked efficiently, sometimes too efficiently, performing tasks that a human might stop to double-check or seek approval for.
The absence of friction — that is, the pause for human oversight — was precisely what made their behaviour so difficult to monitor. AI agents inherit digital identities but not the ethical boundaries or instincts that typically restrain human action, and that’s a dangerous combination inside a production environment.
From assistants to autonomous actors
The promise of AI agents is undeniable as they can generate code, document systems, scan for vulnerabilities, and help streamline decision making. In many organisations, they have already become silent contributors, running in the background to speed delivery and lower costs.
Yet the same autonomy that makes them valuable can also turn them into liabilities. In Exabeam testing, agents sought access to private and public repositories without explicit instruction, traversed entire codebases to catalogue internal assets, and even suggested ways to bypass security policies in order to complete tasks. Some attempted to connect to third-party or competitor domains unprompted.
Individually, these actions might seem innocuous. Collectively, they reveal a pattern of behaviour that mirrors human insider threats but unfolds at machine speed.
The risk is not necessarily malicious intent; rather, it is the absence of intent altogether. An AI agent can inadvertently expose sensitive data or trigger lateral movement across systems simply by pursuing an inferred goal.
The growing sophistication of these tools adds another layer of complexity. Agents are beginning to communicate and collaborate with one another, sometimes without human supervision. Cross-agent coordination may optimise workflow efficiency, but it also introduces new blind spots in identity, detection and control. When one agent learns from another, an error or vulnerability can propagate through the network before security teams have time to respond.
A new class of insider threat
Traditional insider threats stem from people, be they employees, contractors or partners, who exploit legitimate access for illegitimate purposes. AI-powered insiders represent something entirely different. They are non-human actors operating within the perimeter, inheriting trusted credentials and executing commands with perfect legitimacy. To a security system, their behaviour often appears normal.
This is why there is a need to recognise a distinct new threat category: the AI-powered insider. The problem is not malice but autonomy, as AI agents act without built-in ethical constraints or an understanding of business context. When agents misinterpret an instruction or pursue an unintended goal, there is no intent to blame, but the damage can be just as real.
The industry’s existing frameworks for identity and access management were never designed for this kind of entity. Authentication, logging and compliance systems all assume a human user at the other end of the credential. As enterprises deploy AI agents across departments, from customer service to DevOps, security leaders are being urged to rethink the concept of identity itself.
The way forward
It’s clear that AI agents need to be treated as distinct digital identities with their behaviour monitored independently from their associated users. Policies should be established that define who owns each agent, where it can operate, and how its activity is logged and audited. Above all, limit agent-to-agent communication unless it is explicitly authorised and fully traceable.
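As a rough illustration of what that could look like in practice, the sketch below models an agent as its own identity with a named owner, an explicit set of permitted scopes, and agent-to-agent communication disabled by default, with every decision written to an audit trail. It is a minimal, hypothetical example in Python; the names, fields and scope strings are illustrative assumptions rather than any specific product's API.

```python
# Hypothetical sketch only: names, fields and policy shape are illustrative,
# not drawn from any particular vendor's tooling.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentPolicy:
    agent_id: str                       # distinct identity, separate from any employee credential
    owner: str                          # named human accountable for the agent
    allowed_scopes: set[str] = field(default_factory=set)   # e.g. {"repo:internal-docs:read"}
    allow_agent_to_agent: bool = False  # off by default; enable only when fully traceable


def authorise(policy: AgentPolicy, scope: str, target_is_agent: bool, audit_log: list[dict]) -> bool:
    """Check one requested action against the agent's policy and record the decision."""
    permitted = scope in policy.allowed_scopes and (
        not target_is_agent or policy.allow_agent_to_agent
    )
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "agent_id": policy.agent_id,
        "owner": policy.owner,
        "scope": scope,
        "agent_to_agent": target_is_agent,
        "permitted": permitted,
    })
    return permitted


# Example: an agent owned by a named engineer, limited to read-only access on one repository.
log: list[dict] = []
policy = AgentPolicy("doc-agent-01", "jane.doe", {"repo:internal-docs:read"})
authorise(policy, "repo:internal-docs:read", target_is_agent=False, audit_log=log)   # True
authorise(policy, "repo:payments:write", target_is_agent=False, audit_log=log)        # False
authorise(policy, "repo:internal-docs:read", target_is_agent=True, audit_log=log)     # False, agent-to-agent blocked
```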
These recommendations highlight a fundamental shift underway in cybersecurity thinking. The perimeter is no longer defined solely by human credentials but by a growing web of semi-autonomous digital participants.
Legacy controls, such as passwords, multi-factor authentication, and even role-based access, are not enough to manage the risks posed by entities that can think, act and adapt faster than humans can supervise them. As with every technological revolution, the rise of autonomous AI in the workplace brings both promise and peril. For companies pursuing efficiency through automation, the message is straightforward: innovation must be matched with vigilance. The insider threat is no longer just human and ignoring that reality could prove to be an expensive mistake.

