Don't let AI shadows overtake the silver linings
AI applications and agents offer many advantages, and employees know it: a recent study from Melbourne Business School found that half of Australians use AI regularly. The issue is that a trend is emerging of workers using these tools in the workplace without seeking approval or checking company policies.
For IT teams, that’s a nightmare. It risks confidential and proprietary company information falling into the wrong hands which, in the age of AI, is far easier than people often anticipate.
Far from being an outlier, the use of shadow AI is more common than one might think. The latest Nutanix Enterprise Cloud Index (ECI) found 72% of Australian respondents encounter AI applications or agents being implemented by employees without approval or oversight.
Needless to say, business leaders view this as a major risk. Some 92% of local leaders surveyed in the ECI believe unauthorised AI use introduces risk, including exposure of sensitive data and intellectual property.
It’s easy to understand why. Consider public large language models like ChatGPT, Claude or DeepSeek. Depending on the provider’s data-retention settings, confidential business information shared with these models may be retained and used to train future versions. That puts it at risk of surfacing for another user, anywhere in the world, who enters the right prompt.
Part of the solution to shadow AI is education. Employees don’t typically put their companies at risk and jeopardise intellectual property on purpose. Many are simply not aware of the impact a seemingly simple prompt in a public platform might have or what information they’re accidentally exposing to an unsanctioned AI tool.
Education is one side of the coin, but the deeper fix is foundational. We must move beyond general awareness and towards a rigid governance architecture that treats AI not as a novelty, but as a core component of critical infrastructure.
The shift to agentic AI
The urgency is heightened as we move from simple generative models to agentic AI.
Where a chatbot provides information, an agent takes action and makes decisions. This shift makes role-based access control (RBAC) a non-negotiable policy. Just as you wouldn’t give an intern unrestricted access to the company’s financial records, AI agents must be governed by the same granular permissions as human employees, with their ability to view, share or delete sensitive data limited to what their specific function requires.
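In practice, the principle above is the same one enterprises already apply to human accounts: an agent's role maps to an explicit set of permissions, and anything not granted is denied. The sketch below is a minimal, hypothetical illustration of that idea; the role names and permission strings are assumptions for this example, not any particular product's API.

```python
# Minimal RBAC sketch for AI agents (illustrative only).
# Roles and permission names here are hypothetical examples.
ROLE_PERMISSIONS = {
    "support_agent": {"read:tickets", "write:tickets"},
    "finance_agent": {"read:invoices"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an action is permitted only if the role
    explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Under this scheme a support agent can update tickets, but any attempt to read invoices — or any action by an unknown role — is simply refused, mirroring how granular permissions constrain human employees.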
Establishing governance ground zero
Reining in these agents and wresting back control comes down to one word: governance.
First, data governance must be absolute. This involves strict controls on what company data is accessed, ensuring data sovereignty for local compliance, and vetting the external data being ingested to prevent contamination or legal risk.
Second, we must address tool and model governance. While employees can use any public tool for personal use, the enterprise requires a curated, sanctioned stack. Organisations must determine which models are safe for business use, moving away from public-facing applications that lack the necessary privacy safeguards.
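One way to enforce a curated stack is to route every model request through an allowlist, falling back to a sanctioned internal deployment when an unapproved public tool is requested. The sketch below is a hypothetical illustration of that gatekeeping step; the model identifiers are invented for this example.

```python
# Illustrative model-governance gate: only sanctioned models are served.
# "internal-llm-prod" is a hypothetical enterprise-approved deployment.
SANCTIONED_MODELS = {"internal-llm-prod", "internal-llm-staging"}
DEFAULT_MODEL = "internal-llm-prod"

def select_model(requested: str) -> str:
    """Serve the requested model only if it is sanctioned;
    otherwise route the request to the approved default."""
    return requested if requested in SANCTIONED_MODELS else DEFAULT_MODEL
```

A request for a public-facing tool is silently redirected to the approved stack, which keeps sensitive prompts inside infrastructure the organisation controls.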
Finally, for those in regulated environments — such as those governed by APRA — this governance must be auditable. In critical infrastructure, the ability to maintain compliance and legal frameworks is paramount. If your AI tools operate outside a regulated model, you aren’t just risking a data leak; you are risking a regulatory failure.

AI is not on the horizon; it is here today. Organisations that don’t find a way to integrate it through a framework of RBAC, data sovereignty and curated tooling will find that their employees have already integrated it for them — in the shadows.