Finding the middle ground for zero-day vulnerabilities


By Scott Crane, Director of Product Management, Arbor Networks Australia
Tuesday, 07 October, 2014


Zero-day vulnerabilities are a fact of life in cybersecurity. A zero-day vulnerability is an exploitable flaw in a product or application that is either still unknown to the vendor or for which no patch is yet available.

Threats that exploit zero-day vulnerabilities are a key issue because security solutions often fail to detect them: they simply don't know what to look for.

This allows an attacker to gain an undetected foothold within a network; once inside, they can sometimes steal information for extended periods while remaining hidden.

Most organisations have focused their security on preventing threats from entering their networks. To this end, security architectures tend to involve layered solutions at the network perimeter. Once a threat has made it through these defences, many organisations have very limited capability to detect it.

However, security strategies are changing. The ease with which attackers can build new malware variants, obfuscate known threats and manipulate network traffic to bypass security solutions is driving organisations to focus on detecting threats already inside their networks much more quickly.

Why do we need to loop?

Traditionally, security solutions compare traffic or network telemetry data to current threat intelligence information in near real time. This allows them to detect threats that have been seen and analysed elsewhere.

Some solutions extend these capabilities with heuristic, behavioural and sandboxing mechanisms to identify suspicious behaviours or traffic patterns to try and prevent zero-day exploits and new malware variants getting through.

What is common to all of these technologies is that they inspect traffic (or telemetry) once; if nothing is identified, given current intelligence, they simply move on.

As media coverage and our own experience make clear, threats are getting through these defences, and to an extent organisations should now expect this.

This is becoming a hot topic, with organisations now starting to look at how they can more quickly detect threats that have made it inside their networks.

Data from this year's Verizon Data Breach Investigations Report continues a long-running trend: a high proportion of assets can be compromised in days or less, yet a relatively low proportion of breaches are detected in days - the time to detect is often much longer.

Looping aims to reduce the time to detect a threat that is already inside an organisation.

Highest fidelity of data

What we are doing with looping is paralleled in other walks of life. In athletics, for example, samples taken from athletes are now routinely stored for extended periods so they can be re-tested as new types of doping come to light and tests become available. The idea is to catch cheats even if the offence occurred in the past.

Looping is a very simple concept. Threat intelligence evolves over time as new data is gathered, vulnerabilities are identified and new threats and threat variants are analysed. If we could retrospectively and repeatedly apply new threat intelligence to historical network traffic, we should be able to detect threats that made it through our perimeter defences undetected in the past.

One barrier to doing this is that we need the highest-fidelity base information for it to work: historical packet captures. Given these, plus up-to-date threat intelligence feeds, we just need a mechanism for quickly, easily and repeatedly applying one to the other, along with a way of visualising and investigating any results.
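To make the idea concrete, here is a minimal sketch of the looping approach: each time the threat intelligence feed is updated, re-scan stored records derived from historical packet captures against the new indicators. All names here (the record fields, the indicator sets, the `loop_scan` function) are illustrative assumptions, not a real product API.

```python
# Hypothetical "looping" sketch: retrospectively apply newly published
# threat indicators to historical network records.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FlowRecord:
    """Metadata extracted from a stored packet capture (illustrative)."""
    timestamp: datetime
    src_ip: str
    dst_ip: str
    dst_domain: str

def loop_scan(history, bad_ips, bad_domains):
    """Re-scan historical records against an updated indicator set."""
    hits = []
    for rec in history:
        if rec.dst_ip in bad_ips or rec.dst_domain in bad_domains:
            hits.append(rec)
    return hits

# Traffic captured weeks ago, before the indicator existed.
history = [
    FlowRecord(datetime(2014, 9, 1), "10.0.0.5", "203.0.113.9", "update-svc.example"),
    FlowRecord(datetime(2014, 9, 2), "10.0.0.7", "198.51.100.2", "cdn.example"),
]

# An indicator published today flags a connection made last month.
new_intel_ips = {"203.0.113.9"}
new_intel_domains = set()

alerts = loop_scan(history, new_intel_ips, new_intel_domains)
print(len(alerts))  # the 1 September connection is now flagged
```

The key design point is that detection is decoupled from capture time: the same stored records can be re-scanned every time the intelligence feed changes, which is exactly what a one-pass, real-time inspection pipeline cannot do.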

One thing is certain - as our network and service architectures continue to become more complex and more porous, and attackers continue to succeed at overcoming our defences, it is increasingly important for organisations to be able to identify any breach as quickly as possible.

The costs involved in intellectual property or customer information loss can be significant, and minimising the time attackers have to leverage any foothold within an organisation is key.

Technologies that give incident response teams the ability to quickly analyse and identify both current and historical security breaches are becoming increasingly necessary to combat today's threats.



