How to tackle the rising threat of shadow AI
By Philippa Cogswell, Vice President & Managing Partner, Unit 42, Asia Pacific and Japan, Palo Alto Networks
Friday, 15 August, 2025
It took around 23 years for the internet to accumulate a billion users, and mobile technology about 16 years to attain the same number. For generative artificial intelligence (GenAI), it’s going to be closer to seven years, according to a report by Palo Alto Networks titled ‘Unit 42 Threat Frontier: Prepare for Emerging AI Risks’.
GenAI is being adopted faster than any previous enterprise technology, and that pace has revived a familiar problem for technology teams: shadow IT, where employees use software or devices without proper approval. With the rise of AI, workers are now experimenting with or deploying AI models without formal sign-off, creating vulnerabilities and giving rise to a new threat: shadow AI.
The number and range of AI platforms that have sprung up in the past few years alone means that anyone with an internet connection can now access dozens of increasingly powerful AI models. Not that long ago, such AI models were only available to large organisations with deep pockets.
It should come as little surprise that AI platforms have proliferated so widely. Advances in hardware and software have combined to create an ideal incubation chamber for powerful AI, and nowhere is this more evident than in GenAI.
Between GenAI’s ease of use and startlingly sophisticated output, it has fast become a popular tool for an array of use cases, including copilots to assist software developers with coding tasks, voice assistants, chatbots and predictive analysis.
Security and shadow AI
With the accessibility and rapid uptake of AI, businesses are walking into a new era of shadow AI. Workers are increasingly employing AI platforms to perform a range of tasks to augment their workflows and get more done, whether or not they’re sanctioned by their organisations.
The rise of shadow AI creates security issues in corporate settings. Many GenAI platforms present challenges related to data leakage, integrity and governance. Publicly available platforms, while useful, can expose data, not only putting private corporate information at risk but also raising broader concerns about control over sensitive or business-critical data.
For context, Palo Alto Networks’ ‘2025 Unit 42 Incident Response Report’ found that some 40% of cloud security incidents in 2024 stemmed from unmonitored cloud assets and shadow IT. Unmonitored AI platforms have the potential to lead to similar security incidents.
Good governance
To prepare for and tackle such challenges, cybersecurity teams and IT departments would do well to treat data governance as a sensible first step: create and disseminate rules of engagement that the organisation must follow when using any AI tools. Organisations should also prepare for upcoming AI-specific legislation, including how it governs both the use of AI tools and the underlying technologies themselves.
Securing AI means securing both company data and the technologies themselves. Even when people in an organisation are using external AI tooling, businesses can still protect against unauthorised use of AI applications, and third-party services and tools are readily available to assist in this area.
The introduction of strong data governance can also help here. For instance, businesses can enforce least-privilege access rules to certain categories or tiers of corporate data within the organisation. Such rules can be customised depending on the context of the existing data security requirements or the applications involved.
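As a rough illustration, the tier-based least-privilege rules described above might be expressed as a simple deny-by-default policy check. This is a minimal sketch only; the role names, data tiers and policy table are hypothetical examples, not any specific product’s configuration.

```python
# Hypothetical sketch of a tiered, deny-by-default access check for
# deciding whether a role may feed a given class of data to an AI tool.

DATA_TIERS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

# Highest data tier each role may pass to an AI tool (example policy).
ROLE_CEILING = {
    "analyst": "internal",
    "engineer": "confidential",
    "contractor": "public",
}

def may_use_with_ai(role: str, data_tier: str) -> bool:
    """Allow a role to use data with an AI tool only up to its tier ceiling."""
    ceiling = ROLE_CEILING.get(role)
    if ceiling is None:
        return False  # unknown roles get no access by default
    return DATA_TIERS[data_tier] <= DATA_TIERS[ceiling]

print(may_use_with_ai("analyst", "internal"))        # True
print(may_use_with_ai("contractor", "confidential"))  # False
```

The key design point is the deny-by-default behaviour: a role that isn’t explicitly granted a ceiling gets no access at all, which matches the least-privilege principle the governance rules are meant to enforce.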
Design security
Technology leaders can also adopt a ‘secure-AI-by-design’ approach from the outset to help minimise risks in the face of shadow AI. This approach involves monitoring external AI usage, securing the AI application development lifecycle and controlling the data pathways, among other tactics.
For instance, the identity and access control policies that apply to other business applications should also apply to AI systems. If a business is running internal-only AI models for its applications, it shouldn’t rely solely on the internal network to control access to them. Even when the AI sits on the internal network, it’s best to establish identity-based access control.
Again, controlling information can be a key part of a secure-AI-by-design approach, even if the shadow AI tools themselves can’t be secured. Establishing role-based privileges, especially around training data, can bring an element of control for organisations, helping to minimise risks to data security, privacy and integrity.
Securing AI with AI
AI isn’t only at the heart of the platforms powering the rise of shadow AI; it’s also at the heart of some of the best approaches to mitigating shadow AI’s risks. AI-powered cybersecurity platforms can assist in creating a more secure AI ecosystem.
For example, platforms that have AI-powered discovery and classification capabilities can transform how businesses discover, manage and protect their data assets, enhancing data control and minimising the risks of sensitive data being plugged into unsanctioned AI platforms.
In addition to AI data protection, AI-embedded security solutions can also be used at the platform level to scan across an organisation’s AI supply chain, monitor and filter malicious prompts and enforce least-privilege access, maintaining strict governance measures.
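At its simplest, the prompt-filtering layer mentioned above screens outbound prompts against known sensitive patterns before they reach an external AI platform. The sketch below assumes a purely pattern-based filter with two example rules (a card-like number and a fictional project codename); real security gateways combine many more signals, so treat this as a toy illustration of the concept.

```python
import re

# Illustrative sketch of a pattern-based prompt filter of the kind an
# AI-aware security gateway might apply. The patterns are minimal
# examples, not a complete rule set.

BLOCK_PATTERNS = [
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-like numbers
    re.compile(r"(?i)\bproject\s+atlas\b"),                  # hypothetical codename
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is safe to forward to an external AI tool."""
    return not any(p.search(prompt) for p in BLOCK_PATTERNS)

print(screen_prompt("Summarise this public press release"))     # True
print(screen_prompt("Card 4111 1111 1111 1111, is it valid?"))  # False
```

A filter like this sits in the request path, so prompts containing flagged material can be blocked or redacted before they ever leave the organisation.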
GenAI can sometimes be viewed as a double-edged sword for organisations, but with an effective mix of strong data governance, a secure-AI-by-design approach and the adoption of AI-powered defences, organisations can sharpen their AI usage while minimising risks.