What ANZ CISOs must prioritise as AI reshapes security operations

By Marc Caltabiano, Regional Vice President, ANZ, Splunk
Thursday, 16 April, 2026


Artificial intelligence now sits inside most security teams across Australia and New Zealand. Organisations are increasingly turning to AI as alert volumes rise, threats grow more complex and security teams remain understaffed.

According to Splunk’s latest CISO Report, 83% of ANZ CISOs say AI enables their teams to review more security events. At the same time, they remain concerned about hallucinations, governance gaps and unclear oversight, while 86% fear agentic AI will increase the sophistication of social engineering attacks.

This tension shapes the daily work of CISOs, who live and breathe risk.

AI is helping teams move faster and process more information than ever before, yet it is also introducing new operational and governance challenges that CISOs must now manage.

In conversations with CISOs in ANZ, we’re seeing a consistent theme: as AI becomes embedded across security operations, the focus is shifting from adoption to accountability. CISOs must ensure it is implemented responsibly — this means maintaining clear oversight of AI-driven decisions, strengthening governance around models and automation, supporting teams facing persistent talent shortages and aligning security outcomes with business value.

AI enters security operations but humans must stay in charge

The report shows ANZ security teams already use AI to support detection, triage and incident investigation. AI can summarise logs, correlate alerts and flag unusual activity, helping teams process large volumes of data that would otherwise overwhelm analysts.

In fact, 65% of respondents say AI-driven automation has exceeded expectations, while 93% measure the value AI brings to their security programs through improvements in mean time to detect and mean time to respond.

However, the same technology that improves efficiency also introduces new operational risks. Nearly 89% of respondents are concerned about missed alerts or false positives caused by hallucinations, while half highlighted the risk of critical decisions being made without sufficient human oversight.

In practice, AI cannot be treated as an autonomous decision-maker. Security teams must validate AI outputs, maintain clear guardrails and intervene when automated systems operate outside defined boundaries.

For CISOs, this requires building processes in which human oversight remains central. AI outputs should be treated the way a team would treat input from a junior analyst: useful for investigations, but requiring verification before action is taken, particularly when alerts relate to critical assets or regulatory exposure.

Maintaining this balance helps organisations benefit from AI-driven efficiency while ensuring incident response remains grounded in human judgment.

AI governance becomes a core CISO responsibility

As AI becomes part of routine security processes, CISOs are taking on expanded governance responsibilities. The report shows 84% of CISOs are enhancing their AI governance capabilities and controls, reflecting a growing recognition that security leaders must understand not only what their tools detect, but how AI contributes to those decisions.

Increasingly, CISOs are expected to oversee the behaviour of AI systems in the same way they oversee the behaviour of their employees or third-party partners. That means having visibility into how the models analyse data, how decisions are generated and how outputs can be traced and audited when incidents occur.

It also requires CISOs to spend more time educating executive leadership and boards about how AI is being used in security operations and why governance controls matter.

Ultimately, risk does not transfer to the model or the vendor. It remains with the organisation and with the CISO, who is responsible for setting the policies and guardrails that define acceptable use.

AI lifts productivity, but talent gaps and burnout remain

Even as AI expands the capabilities of security teams, workforce shortages remain one of the biggest challenges facing CISOs. Nearly half of CISOs say they do not expect to fill all the skill gaps in their teams.

AI can improve productivity across routine tasks, but it cannot replace the expertise required to understand bad actor behaviours and motives, interpret complex signals or investigate sophisticated incidents.

In many cases, AI is helping security teams work more efficiently, but it is not reducing the pressure placed on experienced analysts. More than a fifth of respondents describe burnout within their teams as significant. Longer alert queues and more complex threats continue to challenge teams’ ability to sustain rapid response over time.

AI can help relieve some of that pressure by analysing patterns, reducing false positives, and automating responses to common security events, allowing teams to focus on the threats that matter most.

CISOs must demonstrate business value

As AI expands across operational workflows, CISOs must also ensure security outcomes connect directly with broader business priorities. This means demonstrating how security improvements support broader operational resilience such as:

  • showing how improvements in detection and response reduce downtime
  • aligning metrics with business continuity goals
  • connecting resilience outcomes to financial and operational risk
  • presenting AI contributions in measurable, not theoretical, terms.

Boards now track resilience in the same way they track financial health. CISOs who frame their programs around measurable outcomes help organisations understand why security investments matter.

AI will continue to expand the speed, complexity and scale of security operations, but the fundamentals of cybersecurity leadership remain unchanged. Cybersecurity has always been a team sport, and CISOs are ultimately the ones calling the plays. They set the direction, guide teams through complex incidents and take responsibility.

AI may change how security teams operate, but leadership, accountability and human judgement will remain the difference between reacting to threats and staying ahead of them.
