Guardian agents will support trustworthy AI: Gartner
By 2030, guardian agent technologies will account for at least 10-15% of the agentic AI market, according to Gartner.
Guardian agents are AI-based technologies designed to support trustworthy and secure interactions with AI. They function both as AI assistants, supporting users with tasks such as content review, monitoring and analysis, and as evolving semi-autonomous or fully autonomous agents capable of formulating and executing action plans, and of redirecting or blocking actions to align with predefined agent goals.
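To make the redirect-or-block role concrete, here is a minimal Python sketch of a guardian evaluating a proposed agent action against predefined goals. Everything in it (the policy names, tools and action shape) is invented for illustration and does not describe any vendor's product.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    REDIRECT = "redirect"
    BLOCK = "block"

@dataclass
class ProposedAction:
    agent_id: str
    tool: str
    payload: dict

# Hypothetical predefined goals, expressed as predicates over a
# proposed action; all names here are illustrative.
POLICIES = {
    "approved_tools_only": lambda a: a.tool in {"search", "crm", "payments"},
    "no_external_payments": lambda a: not (a.tool == "payments" and a.payload.get("external")),
}

def guardian_review(action: ProposedAction) -> Verdict:
    """Evaluate a proposed agent action against predefined goals.

    Blocks on any policy violation; a fuller guardian could instead
    return REDIRECT with a modified, policy-compliant action.
    """
    for policy in POLICIES.values():
        if not policy(action):
            return Verdict.BLOCK
    return Verdict.ALLOW

print(guardian_review(ProposedAction("agent-7", "payments", {"external": True})))  # Verdict.BLOCK
```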
Guardrails needed as agentic AI use continues to grow
According to a May 2025 Gartner webinar poll of 147 CIOs and IT function leaders, 24% had already deployed fewer than a dozen AI agents and another 4% had deployed more than a dozen.
The same poll found 50% of respondents were researching and experimenting with the technology, while another 17% had not yet done so but planned to deploy agents by the end of 2026. Automated trust, risk and security controls are needed to keep these agents aligned and safe, accelerating both the need for and the rise of guardian agents.
“Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails,” said Avivah Litan, VP Distinguished Analyst at Gartner. “Guardian agents leverage a broad spectrum of agentic AI capabilities and AI-based, deterministic evaluations to oversee and manage the full range of agent capabilities, balancing runtime decision-making with risk management.”
Risks increase as agent power spreads
52% of webinar poll respondents said their AI agents currently focus, or are intended to focus, primarily on internal administrative functions such as IT, HR and accounting, while 23% are focused on external customer-facing functions.
As use cases for AI agents continue to grow, several threat categories affect them, including input manipulation and data poisoning, where agents rely on manipulated or misinterpreted data. Examples include (a simple guardrail sketch follows the list):
- Credential hijacking and abuse leading to unauthorised control and data theft.
- Agents interacting with fake or criminal websites and sources that can result in poisoned actions.
- Agent deviation and unintended behaviour due to internal flaws or external triggers that can cause reputational damage and operational disruption.
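As one illustration, a guardian agent can screen the sources an agent is allowed to contact before any interaction occurs, countering the fake-or-criminal-website threat above. The following Python sketch assumes a hypothetical domain allowlist; the domains and function names are invented for illustration.

```python
from urllib.parse import urlparse

# Illustrative allowlist of domains the agent may contact; in practice
# this would come from a vetted policy store, not a hard-coded set.
TRUSTED_DOMAINS = {"example.com", "api.internal.corp"}

def is_permitted_source(url: str) -> bool:
    """Reject agent interactions with unvetted hosts, a simple defence
    against poisoned actions from fake or criminal websites."""
    host = urlparse(url).hostname or ""
    # Accept the trusted domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

assert is_permitted_source("https://api.internal.corp/v1/data")
assert not is_permitted_source("https://evil-lookalike.com/login")
```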
“The rapid acceleration and increasing agency of AI agents necessitates a shift beyond traditional human oversight,” Litan said. “As enterprises move towards complex multi-agent systems that communicate at breakneck speed, humans can’t keep up with the potential for errors and malicious activities. This escalating threat landscape underscores the urgent need for guardian agents, which provide automated oversight, control and security for AI applications and agents.”
Gartner says organisations should focus on three primary usage types of guardian agents to help safeguard and protect AI interactions (a brief sketch of the three roles follows the list):
- Reviewers: Identifying and reviewing AI-generated output and content for accuracy and acceptable use.
- Monitors: Observing and tracking AI and agentic actions for human or AI-based follow-up.
- Protectors: Adjusting or blocking AI and agentic actions and permissions using automated actions during operations.
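A rough Python sketch of how the three roles differ, again with invented names and deliberately simplistic checks standing in for richer evaluations:

```python
from typing import Callable, Optional

# An observed agent action; the shape is purely illustrative.
Event = dict

def reviewer(output: str) -> bool:
    """Reviewer: screen AI-generated output for acceptable use.
    A keyword check stands in for real accuracy and acceptability review."""
    banned = {"password", "credit card"}
    return not any(term in output.lower() for term in banned)

def monitor(event: Event, audit_log: list) -> None:
    """Monitor: record agent actions for human or AI-based follow-up."""
    audit_log.append(event)

def protector(event: Event, policy: Callable[[Event], bool]) -> Optional[Event]:
    """Protector: pass an action through or block it (return None)
    according to an automated policy applied during operations."""
    return event if policy(event) else None

# Run all three roles over one hypothetical agent action.
audit_log: list = []
event: Event = {"agent": "agent-3", "action": "send_email", "body": "here is the password"}

monitor(event, audit_log)                      # always tracked
if not reviewer(event["body"]):                # output fails review...
    event = protector(event, lambda e: False)  # ...so the policy blocks it

print(event)      # None: the action was blocked
print(audit_log)  # the original event remains logged for follow-up
```

In practice each role would draw on far richer signals, such as model-based evaluations, audit pipelines and permission systems, but the division of labour is the same: reviewers judge outputs, monitors record actions, and protectors intervene.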
Whatever the usage type, guardian agents will manage interactions and anomalies. This is central to their integration, since Gartner predicts that 70% of AI applications will use multi-agent systems by 2028.