AI agents: securing the 'artificial workforce'
Agentic AI is poised to bring change at various levels of organisations, and many have started to look into the benefits and competitive advantages they could extract from it. Gartner predicts that by 2028, 33% of enterprise software applications will include embedded agentic AI, enabling 15% of day-to-day work decisions to be made autonomously by AI agents.
Like other technology advances before it, agentic AI comes with its own set of cyber risks, and security teams will need to collaborate with tech and IT leaders to manage this new, intelligent, and autonomous ‘workforce’, and to ensure security is baked into its deployment and operations.
A world of possibilities
Agentic AI is characterised by its ability to act autonomously, and chatbots deployed for customer support, despite their flaws, were perhaps the earliest examples of AI agents. Yet interacting with support chatbots often feels more frustrating than helpful. That experience should serve as a cautionary lesson for organisations considering agentic AI deployments: instead of making rushed decisions driven by the sirens of technology FOMO, deployments should be carefully planned and thought out.
The proliferation of AI agents is inevitable, and as algorithms continue to improve, they will gradually take over more processes and workflows in departments demanding efficiency, scale and data-driven decision making. Areas such as incident response, network optimisation, data analysis and business intelligence, software development, and supply chain management can all benefit from agentic AI’s analytical, organisational, and predictive capabilities. And as the technology matures, we can expect its role to extend to other essential domains, such as analysing medical images and generating diagnoses and personalised treatment plans in healthcare, or accelerating drug discovery in the research and pharma industries.
The transformative potential is significant, but large-scale adoption won’t happen without disruption. AI agents will introduce new responsibilities for technology and security leaders, and change organisations’ digital estates, which is often a catalyst for new cyber risks.
Great power, great responsibilities
CIOs, CTOs, and CISOs already have a lot on their plate, but the spread of AI agents will alter their roles and add to their already extensive responsibilities. Before handing over critical tasks to AI agents, organisations will need to build trust and confidence in their behaviour and reliability.
Traditionally in charge of managing IT systems and implementing new strategies and technologies, CIOs and CTOs will now have to deploy, monitor and measure the reliability and efficiency of this new artificial workforce. Similarly, security teams will no longer be responsible only for securing human users and traditional infrastructure, but also for autonomous AI agents and the new environments they operate in.
To achieve this, security leaders will need full visibility over the pipeline of AI agent deployments, to prevent ‘shadow AI’ from emerging along the way, and they will need to be involved from the earliest stages to ensure security is inherent to agents’ operations. That includes auditing any vendor supplying AI agents or integrating agentic AI capabilities into its solutions, and ensuring transparency and high security standards in the way data is accessed and used for agents’ operations and training.
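One practical way to build that visibility is a central register of approved agents that deployments can be checked against. The sketch below is a minimal illustration in Python; the schema, field names and checks are hypothetical, not a standard, and a real inventory would live in an asset management or governance platform.

```python
from dataclasses import dataclass, field

# Hypothetical registry entry for an approved AI agent deployment.
# Field names and values are illustrative, not a standard schema.
@dataclass
class AgentRecord:
    agent_id: str            # unique identifier for the agent
    owner: str               # accountable business or IT owner
    vendor: str              # supplier or internal team behind the agent
    data_sources: list[str] = field(default_factory=list)  # data it may access
    security_review_passed: bool = False                    # pre-deployment audit

registry: dict[str, AgentRecord] = {}

def register_agent(record: AgentRecord) -> None:
    """Only agents that have passed a security review enter the approved registry."""
    if not record.security_review_passed:
        raise ValueError(f"{record.agent_id} has not passed security review")
    registry[record.agent_id] = record

def is_shadow_ai(agent_id: str) -> bool:
    """Any agent observed in the environment but absent from the registry."""
    return agent_id not in registry
```

Anything detected running in the estate that fails the `is_shadow_ai` check can then be flagged for review before it is allowed to touch corporate data.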
It also includes building secure environments for AI agents to operate in, and preventing their algorithms from being tampered with, whether through data or memory poisoning, cutting off access to the data they need to operate and make decisions, or any other technique that could disrupt agents’ operations and have broader ramifications for an organisation and its stakeholders.
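As a rough illustration of one such control, the snippet below only lets content from approved sources enter an agent’s memory, as a simple guard against memory poisoning. The trusted domains and function names are placeholders; production deployments would layer integrity checks and content scanning on top.

```python
from urllib.parse import urlparse

# Illustrative allowlist of sources the agent may learn from; domains are placeholders.
TRUSTED_SOURCES = {"kb.internal.example.com", "docs.internal.example.com"}

def accept_into_memory(document_url: str, content: str, memory: list[str]) -> bool:
    """Store content only if it comes from an approved source."""
    host = urlparse(document_url).hostname or ""
    if host not in TRUSTED_SOURCES:
        return False  # reject untrusted material before it can shape decisions
    memory.append(content)
    return True
```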
Just as they would with new employees, security teams will need to define access policies for each new AI agent to avoid over-permissioning. A compromised agent with excessive privileges could be exploited to gain access to, and move freely within, an organisation’s systems, disrupt other AI agents it might be connected to, and access and exfiltrate sensitive data. The AI/human security parallel extends to monitoring behaviour: security teams will need visibility into AI agents’ actions and activities, and must be in a position to detect any suspicious behaviour that might indicate compromise.
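To make the least-privilege point concrete, here is a minimal deny-by-default permission check. The agent, tool and resource names are invented for illustration; in practice this logic would be enforced by an identity and access management layer rather than application code.

```python
# Hypothetical per-agent permission map, in the spirit of least privilege.
AGENT_PERMISSIONS = {
    "invoice-triage-agent": {
        "tools": {"read_invoice", "flag_anomaly"},      # no payment execution
        "resources": {"s3://finance/invoices/"},         # no HR or sales data
    },
}

def authorize(agent_id: str, tool: str, resource: str) -> bool:
    """Deny by default: an agent may only call tools and touch resources
    explicitly granted to it."""
    policy = AGENT_PERMISSIONS.get(agent_id)
    if policy is None:
        return False
    return tool in policy["tools"] and any(
        resource.startswith(prefix) for prefix in policy["resources"]
    )

# A compromised agent asking to do something outside its remit is refused.
assert authorize("invoice-triage-agent", "read_invoice", "s3://finance/invoices/2024.pdf")
assert not authorize("invoice-triage-agent", "execute_payment", "s3://finance/invoices/2024.pdf")
```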
We are only scratching the surface, but it is easy to see that securing AI agents will be a multi-pronged affair. Rigorous access controls, continuous monitoring of their behaviour, strong encryption for the data they consume and process, and stringent input/output validation to prevent adversarial attacks are all capabilities organisations will need to build. They should also consider running regular security audits and penetration testing targeting AI agents and their integrations to identify and address vulnerabilities before they can be exploited.
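Output validation is one of the simpler pieces to prototype. The sketch below blocks agent responses that appear to contain sensitive data before they leave the organisation; the patterns are examples only and are nowhere near an exhaustive data loss prevention ruleset.

```python
import re

# Illustrative patterns for secrets that should never appear in agent output.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,16}\b"),                 # possible payment card number
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-like strings
]

def validate_output(response: str) -> str:
    """Raise instead of returning a response that looks like a data leak."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(response):
            raise ValueError("Agent output blocked: possible sensitive data leak")
    return response
```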
Securing AI agents is not going to be a walk in the park, and I want to reiterate how critical it will be to involve security from the outset of agentic AI projects. Without a good understanding of an agent’s mission and inner workings, security teams will not be able to accurately calibrate its security and access parameters.