2026 will be the year identity defines cyber defence
AI wasn’t new in 2025; it shifted from a novel tool to an integral part of day-to-day work. What began as trials quickly became normal working life, with AI built into the tools people use to write, analyse, create and solve problems, and even handle small tasks autonomously from start to finish. For organisations, AI has delivered real boosts in speed, creativity and efficiency, and it’s also raised the bar on what ‘good security’ needs to look like when AI is woven into how work gets done.
In Australia, incidents like the Qantas breach and the Salesloft/Drift supply-chain attack — where attackers used an AI chatbot’s trusted access across connected platforms to steal login tokens and move into customer environments — show how fast the threat landscape is moving alongside adoption.
Cases like Salesloft highlight the growing role of machine identities: the behind-the-scenes logins and access keys used by apps, automated tools and AI systems to do their jobs. As AI becomes more connected and is granted more access, these non-human identities become bigger targets, and protecting them becomes central to staying secure.
2026 looks set to be the year identity shapes the next phase of cyber defence. Below are four predictions on where identity-driven risk is heading next and what leaders should be ready for in the year ahead.
1. Shadow AI will be responsible for some of 2026’s biggest breaches
Some of the biggest cyber attacks of 2026 will result from ungoverned shadow AI use. Many CEOs are pushing their organisations to adopt AI to drive efficiency and innovation. As a result, nearly every department is adopting AI in some form, expanding the attack surface. Much of that AI use is endorsed but unregulated: 44% of organisations run into shadow AI issues at least monthly. AI governance is still lagging behind adoption, and threat actors are already exploiting that gap to gain access through unmanaged AI attack surfaces. These unmanaged tools often store or transmit API keys, service tokens and privileged credentials, turning shadow AI into a direct identity security risk. More likely than not, some of the year's headline breaches will stem from employee AI use, including approved tools and sanctioned use cases, where the underlying AI agents and their credentials aren't properly secured.
2. Deepfakes get democratised
Generative AI tools capable of creating convincing deepfakes, once limited by technical barriers and computing power, will become accessible to almost anyone. With the recent release of Sora 2 and other advanced tools, organisations will see a surge in deepfake-enabled phishing, impersonation and disinformation attacks that are cheaper, faster and more convincing than ever before. Deepfakes will increasingly be able to bypass biometric and identity-proofing systems, pushing organisations towards cryptographic or behavioural verification methods instead. Strong identity verification will be crucial for both the private and public sectors.
3. AI will enable smaller nation states to be cyber superpowers
With AI making it easier to carry out sophisticated cyber attacks with fewer resources, we’ll see unexpected nation-state players become more prevalent in the geopolitical cyber landscape. Ukraine has already given us a blueprint for what this could look like: a smaller country with fewer physical resources inflicting significant damage on a larger country by targeting critical infrastructure completely remotely with advanced cyber warfare tactics. Smaller states and proxy groups may weaponise stolen credentials, synthetic personas and deepfakes for asymmetric influence and identity warfare. AI will enable easy access to advanced cyber attack skills for countries and organisations with far fewer resources, essentially levelling the playing field.
4. Enterprise security meets national security
As state-sponsored attacks escalate, the line between enterprise cybersecurity and national defence will continue to blur. Governments will recognise that identity and privilege control are fundamental to national resilience and will deepen public–private collaboration to strengthen machine identity governance, AI accountability and zero trust adoption across critical infrastructure. Cyber resilience will move beyond compliance to become a strategic imperative, driving investment in identity-centric security frameworks that deliver unified visibility, control and accountability across every identity: human, machine and AI.
The bottom line for 2026
2025 gave us an initial look at what happens when AI scales faster than identity controls. In 2026, the winners will be the organisations that treat identity — human, machine and AI — as the core of cyber defence. Get that right, and AI becomes a force multiplier you can trust, not a risk you have to fear.
