AI adoption is accelerating, but is cybersecurity keeping up?
There’s a familiar pattern that plays out every time a major technology shift hits the market. We saw it with the internet. We saw it with cloud. And now we’re seeing it with AI.
The technology moves quickly, and for many organisations the business case is compelling, driving rapid adoption. Only later do organisations begin to ask the harder question: are we actually ready for this?
The recent announcement of Anthropic establishing operations in Australia and New Zealand is a clear signal that this cycle is unfolding faster than ever in this region. Organisations across Australia, New Zealand and the broader Asia–Pacific are racing to embed AI tools into everyday workflows: from content creation and software development to customer engagement and business decision-making.
This level of demand is also reflected in the level of investment from global technology providers, with many scaling local operations in the region to support organisations navigating this shift.
Recent data shows that Australian SMBs in particular are increasing their investment in AI tools, with adoption of Anthropic’s solutions growing faster than OpenAI’s.
Moving at pace is understandable, because falling behind is not an option for organisations today, particularly small to medium ones. But the rush to adopt AI tools is also exposing how unprepared many are to manage a new category of risk.
When adoption outpaces readiness
AI tools are being adopted faster than most organisations can secure them. Recent research shows 63% of employees globally are already using generative AI at work, with a significant number admitting to sharing confidential or sensitive data in the process.
For SMBs, this creates immediate exposure, often without clear policies, visibility or technical controls in place. In many cases, AI use is happening unofficially, outside formal IT oversight, and without a clear understanding of where data is going once it leaves the organisation.
At the same time, the threat landscape for Australian SMBs remains highly active. Leak site data shows they account for 71% of ransomware victims, indicating an ongoing gap in preparedness, detection and response capability.
Together, these trends are creating a perfect storm. AI expands the digital attack surface at the same time attackers are becoming faster, more opportunistic and more adept at exploiting gaps in identity, access and visibility.
Why AI changes the nature of cyber risk
Developments from Anthropic such as Project Glasswing and Claude Mythos highlight how quickly AI capabilities are evolving. These systems can autonomously identify vulnerabilities at scale, accelerating the speed with which weaknesses can be found and exploited.
Part of the challenge lies in how AI is being positioned inside organisations.
There is a growing assumption that AI tools can help offset cybersecurity challenges: that they simplify operations, reduce human error or act as ‘plug-and-play’ solutions that improve security outcomes by default. In practice, what we are seeing is more nuanced.
AI accelerates workflows, but it also accelerates risk.
AI tools increase the speed and volume of actions taken inside business systems. They introduce new pathways for sensitive data to be accessed, moved and reused. When deployed without proper guardrails, they can create a false sense of control, giving organisations the impression that risk is being managed when it’s simply moving faster and becoming harder to see.
Cybersecurity has always depended on context: from understanding what normal behaviour looks like to responding quickly when something changes. That requires continuous monitoring, correlation across identities, endpoints, networks and cloud services, and the ability to act in real time.
For organisations without a Security Operations Centre (SOC), or without access to that level of operational capability, those requirements are difficult to meet. This is where many SMBs find themselves today.
This is why a purely DIY approach to securing AI is not realistic for most SMBs. It assumes a level of resourcing and expertise that simply does not exist in many environments today.
AI with service, not automation alone
Many organisations voice scepticism about pure automation and value partners that combine AI with human expertise. The more effective model is not AI replacing people, but AI with humans in the loop.
Used correctly, AI can play a meaningful role in modern security operations. It can help reduce alert noise, identify high-risk activity faster and accelerate investigation and response workflows. But those benefits only materialise when AI operates within clearly defined guardrails, with human analysts able to validate, intervene and take control when required.
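In practice, that guardrail pattern often takes the shape of a triage gate: the AI is only permitted to act autonomously within narrow, pre-agreed bounds, and everything else is routed to a person. The sketch below is purely illustrative — the thresholds, field names and routing labels are assumptions, not any vendor's actual implementation:

```python
# Hypothetical human-in-the-loop triage gate. An AI classifier scores each
# alert; only low-risk, high-confidence alerts are closed automatically,
# high-risk ones are escalated, and everything uncertain stays with people.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    risk_score: float      # 0.0 (benign) .. 1.0 (critical), from the AI classifier
    ai_confidence: float   # model's confidence in its own assessment

AUTO_CLOSE_THRESHOLD = 0.2   # guardrail: auto-close only clear false positives
ESCALATE_THRESHOLD = 0.7     # guardrail: high risk always reaches an analyst

def triage(alert: Alert) -> str:
    """Return the routing decision for a single alert."""
    if alert.ai_confidence < 0.9:
        return "human_review"          # low confidence: never act autonomously
    if alert.risk_score <= AUTO_CLOSE_THRESHOLD:
        return "auto_close"            # noise reduction handled by the AI
    if alert.risk_score >= ESCALATE_THRESHOLD:
        return "escalate_to_analyst"   # humans validate and take control
    return "human_review"              # the ambiguous middle stays with people

if __name__ == "__main__":
    for a in [
        Alert("endpoint", risk_score=0.1, ai_confidence=0.95),
        Alert("identity", risk_score=0.85, ai_confidence=0.97),
        Alert("network", risk_score=0.4, ai_confidence=0.6),
    ]:
        print(a.source, "->", triage(a))
```

The key design choice is that the defaults are conservative: the AI can only widen the analyst's capacity (by closing obvious noise), never narrow their authority over anything risky or uncertain.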
This human-in-the-loop approach is not about slowing innovation; it is about ensuring that speed is matched with control.
Encouragingly, organisations adopting this human-in-the-loop model are seeing stronger outcomes. Earlier detection is helping to disrupt attacks before they escalate, and the majority of ransomware incidents are now resolved without payment — reflecting improved response capability and preparation.
AI tool adoption is not slowing down, and that is something to embrace. But as AI becomes embedded in daily operations, the priority must shift from how quickly these tools are adopted to how safely they are supported.
For SMBs, that means ensuring AI initiatives are underpinned by modern security operations — with continuous monitoring, clear visibility across systems and the ability to detect and respond as risk evolves.
The rush to adopt AI isn’t the problem. Adopting it without the cybersecurity foundations to support it is.