Cybersecurity is the engine, not the brake, for Australia's AI ambition

Avanade Australia Pty Ltd

By Russell Todd, Security Solutions Lead, Avanade Australia
Thursday, 04 December, 2025


The discussion in Australia’s C-suites is loud and clear: AI is the future of growth. Research from Avanade shows that mid-market leaders are already bullish, with 86% planning to increase generative AI budgets and many expecting up to a fourfold return within 12 months.

Yet this urgent push for AI often stalls when it hits the firewall of cybersecurity.

For too long, security has been seen as the necessary evil — the department of 'No' that slows innovation. This mindset is fundamentally broken. Today, you cannot talk about AI adoption without talking about cybersecurity, and you cannot talk about modern cybersecurity without addressing AI. The two are inextricably linked — one delivers the speed and revenue; the other delivers the safety and sustainability.

Cybersecurity is not the brake on AI; it must be the enabling engine.

Regulatory uncertainty is slowing AI uptake

But even as investment momentum accelerates, many organisations remain uncertain about how to adopt AI responsibly, effectively and securely. Recent research from Cisco and the Governance Institute of Australia shows that many organisations are still hesitating to fully adopt AI, citing ethical concerns, security and privacy challenges, and operational hurdles. Almost two-thirds of businesses do not provide AI training, and most cannot effectively measure the return on their AI investments.

This hesitation is not just a cultural issue — it has real financial consequences. The same research estimates that Australian businesses could leave $142 billion on the table by 2030 due to a lack of clear AI governance. The federal government had previously pledged a national AI plan by year-end and is updating voluntary AI safety standards, but the absence of mandatory rules has created uncertainty.

Cybersecurity provides a practical solution to this impasse: by embedding secure and responsible practices into AI initiatives, organisations can adopt innovation confidently, even amid evolving regulations.

AI is not the problem — it's the spotlight

The real barrier isn’t AI itself, but the decades-old issue of poorly managed, fragmented data. For years, organisations have accumulated inconsistent, siloed and inadequately governed data — a problem that became even more pronounced during the pandemic’s rapid shift to the cloud, where speed often trumped governance.

Today, tools like Microsoft Copilot and other LLM-powered systems surface and synthesise whatever data exists. If that data is unstructured, low-quality or poorly secured, AI will simply expose those weaknesses faster and at scale.

Avanade’s research reinforces this, with two in three Australian business and IT leaders saying poor data quality and governance are holding back their AI progress. Those tackling this foundational work are already seeing positive impact, with 85% implementing or developing new data governance platforms.

The two faces of AI in security

Something I keep in mind, and keep reminding my team, is that AI is only as secure as the environment it operates in; conversely, when implemented correctly, it is the most powerful tool we have for staying ahead of threats. I see it two ways: the inherent risk in the AI environment, and AI as a support for cybersecurity efforts.

1. Securing the AI environment (the risk)

Organisations often underestimate the risk of AI accessing sensitive information. It’s not just about compliance — it’s about trust, both internally and with your customers. Securing the AI environment goes beyond traditional measures; it’s about governance, control and building AI-ready systems:

  • Governing access: AI agents, including copilots, must never have unchecked access to sensitive information such as customer records or financial data. Responsible AI guidelines are essential, and it's critical to define boundaries for AI from the outset — clear policies prevent mistakes before they happen.
  • Modernising the core: AI performs best on clean, structured and well-governed data. That's why almost all Australian leaders (94%) are fast-tracking the modernisation of legacy systems. Here, security is a guiding principle, not just a safeguard: modernisation is about more than technology, because it enables AI to work safely and effectively, and ignored technical debt is a hacker's playground.

2. AI as a security force multiplier (the enabler)

Today, security teams are under immense pressure to stay ahead of AI-driven threats. AI doesn't replace human judgement; it amplifies it, freeing analysts to focus on the strategic, high-value work that machines can't replicate.

  • Elevating the human: Tools like Copilot for Security synthesise vast volumes of threat intelligence in real time, freeing analysts from repetitive triage. This empowers them to focus on strategic activities, such as threat hunting and vulnerability evaluation, where their expertise has the greatest impact.
  • Doing more with less risk: Rather than expanding teams with multiple junior hires, AI enables security teams to deliver higher-value work at scale. Instead of blockers, they become enablers, able to give an informed, rapid go-ahead on new initiatives rather than defaulting to a cautious ‘No’. That’s how organisations stay agile in a fast-moving threat landscape.

The leader’s mandate: holistic confidence

The shift required of Australian CEOs and CISOs is one of leadership and investment priority. The financial services sector, for example, has already proven that foundational security is key to resilience.

Our collective mandate is to adopt a holistic approach. Don't spend millions on fragmented security products that overlap. Instead, invest strategically in foundational maturity and workforce fluency (77% are increasing investment in training) to ensure that every AI initiative is launched with secure guardrails.

By treating cybersecurity as the core strategic pillar that validates and accelerates AI initiatives, Australian businesses can confidently capture the anticipated revenue gains and maintain a competitive edge in this new era of digital transformation.

Image credit: iStock.com/Just_Super
