Proactive cybersecurity must lead innovation in Australia’s AI future
The recent establishment of the Australian Artificial Intelligence Safety Institute (AISI) represents a critical step in advancing the National AI Plan’s commitment to proactive safety. As artificial intelligence becomes deeply embedded across the economy, AISI’s role in identifying, managing and mitigating emerging risks, including cyber threats, will be essential to ensuring AI delivers societal and economic benefit while keeping Australians safe.
The urgency of proactively addressing AI-driven cyber risk is now undeniable. Over the past year alone, the cyber threat landscape has undergone a profound shift. AI has evolved from a defensive capability used to enhance detection and response into a powerful offensive tool leveraged by attackers. The scale, speed and sophistication enabled by AI now exceed the capacity of traditional security approaches that rely heavily on human intervention and retrospective analysis.
Nation states and organised cybercriminal groups are already deploying AI to identify zero-day vulnerabilities, automate exploitation chains and convincingly mimic human behaviour. These capabilities allow malicious actors to operate continuously, adapt in real time and execute attacks with unprecedented efficiency. The result is a fundamentally asymmetric threat environment: attackers innovate continuously, while defenders who decline to embrace the technology and instead rely on fragmented tools and manual processes fall further behind. Security teams must avoid this position by treating AI as a core component of their security programmes.
Emerging cyber threat scenarios magnify the challenge
AI-powered financial system manipulation is a growing concern. Autonomous trading bots and AI-generated deepfakes can be used to manipulate equity markets, commodities and digital assets at machine speed. By impersonating regulators, executives or market participants, AI systems can disseminate false earnings announcements, fabricate investor briefings or simulate market shocks. These actions could trigger cascading losses within seconds, far outpacing the ability of human operators to intervene and undermining confidence in global financial markets.
Another significant risk is the rise of synthetic identities. AI-generated personas are increasingly capable of infiltrating banking platforms, healthcare systems, social networks and even electoral infrastructure. These synthetic entities can transact, vote, influence public discourse and form coordinated fake movements at scale. As identity verification systems become overwhelmed, trust in digital identity itself is eroded, creating systemic risk across both public and private sectors.
AI-directed hybrid warfare further blurs the line between cyber, informational and physical domains. Both state and non-state actors can deploy autonomous AI agents to coordinate cyberattacks, disinformation campaigns and physical disruption simultaneously. These attacks require relatively limited resources yet can inflict significant economic and social harm. For example, AI could be used to disable transport logistics, disrupt energy grids and launch coordinated misinformation campaigns in parallel, placing critical infrastructure, government agencies and defence logistics under synchronised pressure from actors with minimal technical capability.
Supply chain integrity is also under increasing threat. AI-enabled attackers can infiltrate software and firmware supply chains through subtle, almost undetectable modifications. Malicious logic can be embedded into commonly used open-source libraries, cloud services or IoT firmware and propagated across thousands of organisations. These compromises may remain dormant for months before being activated, resulting in widespread operational disruption across entire industries.
Finally, data theft and coercion are entering a new phase. Adversaries are already harvesting and stockpiling encrypted data in the expectation that future advances in quantum computing will render it readable. In parallel, AI systems can analyse stolen data to construct highly targeted blackmail and coercion campaigns against corporations, governments and individuals. These campaigns can exert pressure long before quantum decryption becomes viable, forcing financial, strategic or political concessions.
Security must anticipate, not just react
Confronting these risks requires a transformational shift in how security is designed and operated. Traditional solutions, static risk assessments and manual workflows are no longer sufficient in an environment defined by AI-driven adversaries.
What is required is a unified, intelligent security platform that delivers real-time visibility, automated threat detection and orchestrated response across the entire digital estate. Effective defence depends on the ability to ingest vast volumes of telemetry across endpoints, networks, identities, cloud environments and applications, normalise that data at scale and apply machine learning models that can accurately distinguish between benign and malicious activity.
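The pipeline described above can be illustrated with a small sketch. The event schema, field names and the single numeric feature here are hypothetical, and a z-score outlier test stands in for the far richer machine learning models a real platform would use; the point is only the shape of the flow: ingest source-specific telemetry, normalise it into one record format, then score it for anomalous activity.

```python
# Sketch only: hypothetical telemetry schema and a simple statistical
# outlier test standing in for a production ML model.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Event:
    source: str             # "endpoint", "identity", "network", ...
    entity: str             # host or account the event concerns
    logins_per_hour: float  # example numeric feature


def normalise(raw: dict) -> Event:
    """Map source-specific field names onto one common schema."""
    if raw["kind"] == "edr":
        return Event("endpoint", raw["host"], raw["auth_rate"])
    return Event("identity", raw["user"], raw["login_rate"])


def flag_outliers(events: list[Event], z_threshold: float = 3.0) -> list[Event]:
    """Flag events whose feature deviates strongly from the baseline."""
    values = [e.logins_per_hour for e in events]
    mu, sigma = mean(values), stdev(values)
    return [
        e for e in events
        if sigma and abs(e.logins_per_hour - mu) / sigma > z_threshold
    ]


# Twenty ordinary endpoint events plus one identity event with an
# extreme login rate; only the extreme event is flagged.
raw_feed = (
    [{"kind": "edr", "host": f"host-{i}", "auth_rate": 2.0} for i in range(20)]
    + [{"kind": "iam", "user": "svc-backup", "login_rate": 500.0}]
)
events = [normalise(r) for r in raw_feed]
suspicious = flag_outliers(events)
```

In practice the normalisation layer covers dozens of telemetry formats and the scoring step is a trained model, but the architecture, a common schema feeding a shared detector, is the same.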
Equally important is deep integration across security domains, including endpoint detection and response, security information and event management, security orchestration and automation, and cloud security. These integrations enable seamless workflows that compress the time between detection and response, reduce reliance on manual intervention and limit the blast radius of attacks.
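How orchestration compresses the detection-to-response window can be sketched as a playbook dispatch table. The alert types and containment actions below are illustrative inventions, not any vendor's API: a matched detection triggers an automated action immediately, and only unrecognised alerts fall back to a human analyst.

```python
# Sketch only: hypothetical alert types and containment actions showing
# automated detection-to-response dispatch.
from typing import Callable

actions_taken: list[str] = []


def isolate_host(alert: dict) -> None:
    """Contain a compromised endpoint (illustrative action)."""
    actions_taken.append(f"isolated {alert['host']}")


def disable_account(alert: dict) -> None:
    """Lock a suspicious identity (illustrative action)."""
    actions_taken.append(f"disabled {alert['user']}")


# Detection type -> automated response playbook.
PLAYBOOKS: dict[str, Callable[[dict], None]] = {
    "ransomware_behaviour": isolate_host,
    "impossible_travel": disable_account,
}


def respond(alert: dict) -> None:
    handler = PLAYBOOKS.get(alert["type"])
    if handler:
        handler(alert)  # automated containment, no analyst hand-off
    else:
        actions_taken.append(f"escalated {alert['type']} to analyst")


respond({"type": "ransomware_behaviour", "host": "srv-12"})
respond({"type": "impossible_travel", "user": "j.smith"})
```

The design choice this illustrates is that automation handles the known, high-confidence cases at machine speed, which both shortens containment time and reserves human attention for the alerts that genuinely need judgment.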
As Australian organisations accelerate the adoption of AI, the AISI’s oversight will play a pivotal role in building trust and confidence. AI presents extraordinary opportunities for productivity, innovation and growth, but it also amplifies risk. Organisations must therefore invest in predictive, autonomous and integrated defence strategies that evolve as quickly as the threats they face. With the right governance and security foundations in place, Australia can harness the benefits of AI while remaining resilient in an increasingly complex threat landscape.