Four ways AI can finally make threat intelligence useful and not just noisy
For years, many CISOs have been told that threat intelligence is the missing piece in their security program: the glue that would finally tie risk, controls and decision-making together. Yet inside most organisations the reality is very different. Threat intelligence feeds are noisy, reports often arrive out of context or too late to make a difference, and the sheer volume of data makes it difficult for already-stretched security teams to turn intelligence into action.
That experience isn’t unique. According to Google Cloud research cited in ISACA’s recent white paper, Building a Threat-Led Cybersecurity Program with Cyberthreat Intelligence, 61% of cybersecurity professionals say they’re overwhelmed by the number of intelligence feeds coming in, and nearly the same number say they can’t make the intelligence actionable. The outcome is predictable: millions are spent, minimal operational value is gained and threats continue to slip through the cracks.
But the threat environment isn’t standing still: attackers are stealing credentials at scale, buying and selling access to networks, and using generative AI to speed up their operations. To keep up and stay ahead, defenders need to rethink how they use threat intelligence. The ISACA white paper shows that a modern, threat-led approach, supported by AI and automation, can finally turn intelligence into real operational value.
Below are four practical ways AI can do exactly that.
1. Using LLMs to analyse initial access broker activity at scale
The cybercrime ecosystem increasingly resembles a mature market, complete with supply chains, brokers and marketplaces. Initial access brokers (IABs) are central players, selling entry points into enterprise networks. But their posts are often cryptic, inconsistent and buried across dozens of dark-web forums and encrypted channels.
Historically, manually analysing IAB chatter has been slow, labour-intensive work. A human analyst might review a few dozen posts a day; meanwhile, thousands more are published.
Large language models change that equation.
LLMs can be used to automatically identify IAB listings, extract structured information from unstructured posts, and flag listings relevant to a specific organisation or sector. This significantly reduces manual triage time and helps analysts focus on the highest-risk access for sale, the kind that precedes ransomware incidents.
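As a rough illustration, the sketch below uses an LLM to pull structured fields out of a raw listing. It assumes an OpenAI-compatible chat API; the model name, prompt and output schema are placeholders that would be tuned to the forums actually being monitored.

```python
import json
from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI()

EXTRACTION_PROMPT = """You are a threat intelligence analyst.
From the underground forum post below, extract a JSON object with keys:
access_type, victim_sector, victim_country, victim_revenue, asking_price.
Use null for anything not stated. Return only the JSON object.

Post:
{post}"""

def extract_iab_listing(post: str) -> dict:
    """Turn an unstructured IAB post into structured, filterable fields."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model will do
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(post=post)}],
        response_format={"type": "json_object"},  # request strict JSON output
    )
    return json.loads(response.choices[0].message.content)

listing = extract_iab_listing(
    "Selling RDP + domain admin, AU logistics co, rev ~$120M. 3k, escrow ok."
)
if listing.get("victim_country") in ("AU", "Australia"):
    print("Escalate: access to an Australian organisation is for sale", listing)
```

Run across thousands of posts a day, the same pattern turns a manual triage bottleneck into a filterable, searchable queue.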
2. Prioritising breached identities using automated classification
Infostealer malware has become one of the most damaging and underestimated drivers of enterprise compromise. The white paper notes that millions of ‘stealer logs’ are sold annually, and nearly a third originate from enterprise-licensed environments. This means corporate credentials, session cookies, browser-stored passwords and sensitive tokens are ending up in criminal markets at unprecedented rates.
The challenge? No human team can meaningfully triage that volume of exposure.
AI-assisted prioritisation is now essential. By automatically classifying breached identities based on factors such as domain sensitivity, privileged access, critical system relevance, active sessions and MFA posture, security teams can immediately escalate the exposures that really matter.
This flips the model: instead of drowning in alerts, teams receive a structured, risk-ranked queue of high-priority exposures. For organisations running hybrid identity environments, this is one of the most impactful steps they can take.
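A minimal sketch of that kind of risk ranking is shown below. The field names and weights are assumptions for illustration; a real implementation would pull these signals from the identity provider and tune the scoring to the organisation’s own environment.

```python
from dataclasses import dataclass

@dataclass
class BreachedIdentity:
    username: str
    domain_sensitive: bool   # e.g. production or finance domain
    privileged: bool         # admin or other high-impact access
    critical_system: bool    # tied to a crown-jewel application
    session_active: bool     # stolen cookie/token still usable
    mfa_enrolled: bool

def risk_score(identity: BreachedIdentity) -> int:
    """Weighted score: higher means escalate first. Weights are illustrative."""
    score = 0
    score += 30 if identity.domain_sensitive else 0
    score += 30 if identity.privileged else 0
    score += 20 if identity.critical_system else 0
    score += 15 if identity.session_active else 0  # replay risk right now
    score += 5 if not identity.mfa_enrolled else 0
    return score

exposures = [
    BreachedIdentity("jdoe", True, True, True, True, False),
    BreachedIdentity("intern01", False, False, False, False, True),
]

# The risk-ranked queue: highest-priority exposures first
for identity in sorted(exposures, key=risk_score, reverse=True):
    print(identity.username, risk_score(identity))
```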
3. Automating credential verification and remediation
Most Australian enterprises still grapple with credential-based intrusions. Attackers know it: they continue to take the easiest path in, reusing stolen credentials sourced from stealer logs or phishing kits.
ISACA’s guidance emphasises the value of establishing automated workflows that:
- verify whether exposed credentials are active
- force immediate resets or token revocations
- track patterns to identify systemic gaps in authentication controls.
When integrated into identity governance or SOAR systems, this automation dramatically reduces mean time to response (MTTR). The alternative, manually resetting accounts after threat intelligence reports surface, is no longer viable.
This is an area where AI can help prevent intrusions outright.
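As a sketch, such a workflow might look like the following, with hypothetical helpers standing in for the IdP and SOAR integrations an organisation actually runs.

```python
def credential_is_active(username: str, password: str) -> bool:
    # Placeholder: verify against the identity provider, e.g. a controlled
    # authentication check or a comparison against current password hashes.
    return True

def remediate(username: str) -> None:
    # Placeholder: force a password reset and revoke sessions and refresh
    # tokens via the IdP API, so stolen cookies stop working too.
    print(f"reset + token revocation for {username}")

def record_exposure(username: str, source: str) -> None:
    # Placeholder: log the exposure so repeated hits reveal systemic gaps,
    # e.g. a business unit lagging on MFA enrolment.
    print(f"recorded {username} exposed via {source}")

def handle_exposed_credential(username: str, password: str, source: str) -> None:
    if credential_is_active(username, password):
        remediate(username)            # act before the attacker does
    record_exposure(username, source)  # track stale exposures as well

handle_exposed_credential("jdoe", "hunter2", "stealer-log feed")
```

Wired into a SOAR playbook, the same logic executes in minutes rather than waiting for an analyst to read a report.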
4. Refining IoC feeds so analysts aren’t overwhelmed
Technical threat intelligence such as indicators of compromise (IoCs) remains one of the most established forms of intelligence, and also one of the most problematic. Organisations often ingest too many feeds, don’t deprecate stale indicators, and inadvertently flood security tools with false positives.
AI-driven curation solves much of that.
ISACA recommends using automation to score, age, cluster and prioritise indicators based on:
- the organisation’s threat model
- the reliability and historical accuracy of the source
- indicator age and relevance
- alignment with known active threat groups targeting the sector.
Instead of overwhelming SOC analysts, curated IoC feeds improve signal-to-noise ratios and sharpen threat-hunting activities.
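As one way to make that concrete, the sketch below scores and filters indicators before they reach detection tooling. The decay window, weights and threshold are assumptions, not values from the ISACA paper.

```python
from datetime import datetime, timezone

def indicator_score(ioc: dict, active_groups: set) -> float:
    """Combine freshness, source reliability and sector relevance."""
    age_days = (datetime.now(timezone.utc) - ioc["first_seen"]).days
    freshness = max(0.0, 1.0 - age_days / 90)  # decay to zero over ~90 days
    reliability = ioc["source_accuracy"]       # 0..1, from historical hit rate
    relevance = 1.0 if ioc["threat_group"] in active_groups else 0.3
    return freshness * reliability * relevance

def curate(feed: list, active_groups: set, threshold: float = 0.4) -> list:
    """Drop stale or low-value indicators before they reach detection tools."""
    kept = [i for i in feed if indicator_score(i, active_groups) >= threshold]
    return sorted(kept, key=lambda i: indicator_score(i, active_groups),
                  reverse=True)

feed = [{
    "value": "203.0.113.7",  # documentation-range IP, for illustration
    "first_seen": datetime.now(timezone.utc),
    "source_accuracy": 0.9,
    "threat_group": "ExampleRansomGroup",
}]
print(curate(feed, active_groups={"ExampleRansomGroup"}))
```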
Why traditional threat intelligence programs fail and how to fix them
During my career I’ve seen the same themes repeated:
- Too many feeds, not enough context.
- Intelligence that isn’t tied to risk.
- Manual processes that can’t scale.
- Stakeholders who aren’t aligned on priorities.
ISACA’s white paper addresses these systemic issues and gives tech professionals a practical blueprint for building or strengthening a modern threat intelligence program. It emphasises building priority intelligence requirements (PIRs) tied to the organisation’s unique threat model and risk appetite. When PIRs are well structured (specific, actionable, measurable and time-bound), intelligence stops being generic and becomes directly tied to business outcomes.
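As a hypothetical illustration (the field names are mine, not the white paper’s template), a well-formed PIR can be captured as structured data so it can be tracked, measured and eventually retired:

```python
pir = {
    "id": "PIR-01",
    "question": ("Which initial access brokers are advertising access to "
                 "Australian financial services, and via which techniques?"),
    "linked_risk": "Ransomware deployed via purchased network access",
    "stakeholders": ["SOC", "identity team", "CISO"],
    "collection_sources": ["dark-web forums", "stealer-log feeds"],
    "measure_of_success": "Relevant listings triaged within 24 hours",
    "review_cadence": "quarterly",  # time-bound: revisit, refine or retire
}
```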
The result is a threat-led approach where:
- controls are adjusted based on real attacker behaviour
- SOC teams receive intelligence that is relevant and timely
- executives can tie intelligence to risk reduction, not report volume.
Done poorly, threat intelligence is noise. But done well, it becomes one of the most powerful accelerators of cyber resilience.