How to effectively manage shadow AI usage without slowing innovation
A strategic battle is unfolding across the cybersecurity landscape. At issue is whether organisations should restrict the use of ‘shadow AI’ (unsanctioned artificial intelligence tools adopted by employees) and risk constraining innovation, or permit broader experimentation while absorbing the heightened exposure to data leakage, compliance breaches and external exploitation.
For many executives, the debate reflects a familiar tension between control and competitiveness. Tight governance may reduce legal and operational risk, but it can also slow the adoption of productivity-enhancing technologies. Conversely, a permissive approach can accelerate innovation yet expand the organisation’s attack surface and complicate regulatory oversight.
The emerging consensus suggests a third path may be possible: a structured framework that balances innovation with visibility and enforcement. By establishing clear usage policies, deploying monitoring mechanisms and aligning AI adoption with compliance requirements, companies may be able to harness the benefits of shadow AI without surrendering control.
The problem with shadow AI
The biggest problem to address is the uncertainty that comes with shadow AI usage. Shadow AI has no universal definition, but it typically arises when employees use tools the company is unaware of to perform business functions.
One reason shadow AI is so difficult to limit is how easy it is to adopt. If people can use a technology to do their job more efficiently, they will, even without their organisation’s approval.
Organisations also have no way of knowing whether sensitive information is being fed into these tools, nor can they confirm that the information the AI generates is accurate.
Rapid evolution
The ability to adopt new and exciting technologies will always come before the ability to understand and control them, and AI displays this on an unprecedented scale.
The exponential growth and spread of AI has resulted in individuals having the greatest control over their own creative expression at any point in history, and this brings immense opportunities.
Unfortunately, however, many organisations have implemented and adopted AI without truly understanding it. As a result, the potential for security breaches has skyrocketed, and the amount of work and analysis that security teams must conduct to mitigate these breaches has become overwhelming.
It’s therefore important to focus on ways to manage third-party risk: while an organisation may own an AI agent, it is often built on another company’s software. When a problem arises, how much that vendor is willing to help will depend on its stake in the agent and its exposure to the potential fallout.
The growing role of AI agents
Additionally, there needs to be ways to create greater visibility into the actions of AI agents. In the past, this has come from measures like network logs, endpoint logs and data loss prevention strategies. It’s important to understand the system’s inputs and outputs, which identities were involved, and what the context of the situation was when issues began to arise.
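The visibility record described above — inputs, outputs, the identities involved and the surrounding context — can be sketched as a simple structured audit log. The field names and the example agent below are illustrative assumptions, not a reference to any specific product:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AgentAuditRecord:
    """One visibility record for a single AI agent action (hypothetical schema)."""
    agent_id: str   # which agent acted
    identity: str   # which human or service identity invoked it
    action: str     # what the agent did
    inputs: dict    # data fed into the agent
    outputs: dict   # what the agent produced
    context: dict = field(default_factory=dict)  # surrounding state when it ran
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialise the record for shipping to a SIEM or log pipeline."""
        return json.dumps(asdict(self))


# Example: log one action by a hypothetical document-summarising agent.
record = AgentAuditRecord(
    agent_id="summariser-01",
    identity="jane.doe@example.com",
    action="summarise_document",
    inputs={"document": "q3-report.docx"},
    outputs={"summary_length": 240},
    context={"source_ip": "10.0.0.12"},
)
print(record.to_json())
```

Records in this shape can sit alongside existing network and endpoint logs, so analysts can answer "which identity asked which agent to do what, with what data" after the fact.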
On the response side, organisations need to determine how to quickly identify and respond to any problems that occur. However, response actions need to be updated to address the problems that modern AI agents pose.
One option could be the establishment of an AI governance group responsible for ensuring AI agents complete their programmed tasks without creating risk.
This move would allow individuals to utilise the creative freedom and convenience that comes from AI, while also protecting organisations from risk of attacks and allowing security teams to rely on the agents to do their tasks without needing to constantly supervise them. Trustworthy, reinforced AI agents make for a more efficient security defence system.
Managing AI agents
There also needs to be an additional response capability through which IT teams can retrain, disable or force relearning of AI agents. There should be a counterpart within the SOC for incident response, and business owners need to be responsible for building this structure.
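The retrain/disable/force-relearn response actions above can be sketched as a small agent registry the SOC acts on. The statuses and class names are assumptions for illustration, not an existing tool:

```python
from enum import Enum


class ResponseAction(Enum):
    """Response actions an IT or SOC team might take against a misbehaving agent."""
    RETRAIN = "retrain"
    DISABLE = "disable"
    FORCE_RELEARN = "force_relearn"


class AgentRegistry:
    """Tracks agent status so response actions are recorded, not ad hoc."""

    def __init__(self):
        self._status = {}

    def register(self, agent_id: str) -> None:
        self._status[agent_id] = "active"

    def respond(self, agent_id: str, action: ResponseAction) -> str:
        if action is ResponseAction.DISABLE:
            self._status[agent_id] = "disabled"
        else:
            # Retraining or forced relearning takes the agent out of
            # service until it has been revalidated by the governance group.
            self._status[agent_id] = "pending_revalidation"
        return self._status[agent_id]

    def status(self, agent_id: str) -> str:
        return self._status[agent_id]
```

The point of the registry is that every response action leaves a queryable state change, which is what insider threat analysts need when reconstructing an incident.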
Insider threat analysts will depend heavily on these adjustments. If an organisation can build a structure and a process for handling the information overload that shadow AI creates, insider threat analysts will be better placed to handle threats before they become devastating.
Establishing a clear, enforceable policy on AI usage is emerging as a prerequisite for effective risk management. Organisations need a defined set of approved and vetted tools, along with a formal process to evaluate, test and deploy new AI agents. That process should include engineering validation and security review to ensure new technologies meet operational and compliance standards.
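A "defined set of approved and vetted tools" ultimately reduces to an allowlist check. The tool names, data classifications and policy shape below are hypothetical, a minimal sketch of what such an enforcement point might look like:

```python
# Hypothetical allowlist: each vetted tool is cleared up to a data
# sensitivity ceiling set during the evaluation and security review.
APPROVED_TOOLS = {
    "vetted-chat-assistant": {"max_data_classification": "internal"},
    "code-review-copilot": {"max_data_classification": "public"},
}

# Ordering of data sensitivity levels, lowest to highest.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}


def is_usage_allowed(tool: str, data_classification: str) -> bool:
    """Allow a tool only if it is vetted AND cleared for this data sensitivity."""
    policy = APPROVED_TOOLS.get(tool)
    if policy is None:
        return False  # unapproved tool: this is shadow AI by definition
    ceiling = policy["max_data_classification"]
    return SENSITIVITY[data_classification] <= SENSITIVITY[ceiling]
```

A check like this can back both a self-service lookup for employees ("can I use this tool with this data?") and an automated gate in a proxy or DLP control.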
Just as important, the framework must be simple and transparent. Overly complex or opaque controls are likely to drive employees towards workarounds, undermining governance efforts and increasing exposure to risk.

The path forward begins with understanding. Companies cannot manage technologies they do not fully comprehend, yet many have prioritised rapid deployment over visibility and oversight. Rebalancing that approach, by pairing innovation with disciplined governance, will be critical. By aligning experimentation with security safeguards, organisations can reduce vulnerability to external threats while preserving the flexibility employees need to innovate and compete.