How businesses can tackle AI-powered deep fake attacks

Trend Micro Australia Pty Ltd

By Mick McCluney, Technical Director at Trend Micro, ANZ
Friday, 15 December, 2023


The widespread use of generative artificial intelligence (AI) platforms like ChatGPT is leading to step changes in productivity for organisations in areas such as content creation. However, cybercriminals are also leveraging the productivity perks that such technology offers.

Just as platforms like ChatGPT can create convincing and sophisticated content automatically, other generative AI platforms such as VALL-E can simulate real human voices. Naturally, cybercriminals are leveraging this kind of technology to create convincing audio content, too.

Cybercriminals have used such AI-generated ‘deep fake’ audio to simulate kidnapping scenarios and convince innocent people to hand over ransom money. While this kind of virtual kidnapping is usually targeted at individuals, organisations are also at risk of other types of impersonation exploits using similar tactics and technology.

Impersonation scams leveraging text, phone, social media and email, including business email compromise (BEC), may use generative AI's 'deep fake' capabilities to create sophisticated and convincing messages. In the business world, cybercriminals tend to target employees with email, text and cloned voice messages that appear to come from their superiors or other business associates.

In such cases, malicious actors often draw on publicly available information, gleaned from social media platforms, about targets, the people they work with or the organisations for which they work. Malicious actors have been known to leverage a high-resolution video of an individual's face and voice, for example, to create a deep fake persona.

The steps involved in an impersonation scam or deep fake attack do not require a huge amount of skill or knowledge, given the tools available. The advent of easily accessible generative AI platforms designed for use by the criminal community is providing additional scale and capability to bad actors who may have previously lacked the technical skill to accomplish such exploits.

Looking ahead, this scale is only likely to accelerate. It’s possible that the criminal use of victim propensity modelling, for instance, will enable the extraction of large lists of potential victims, expediting the targeting and personalisation of fraudulent messages.

But that doesn’t mean businesses and individuals can’t do something about it today.

Practical steps to protecting ourselves

Given that some exploits rely on information posted publicly on social media platforms, implementing a strong social media policy is a great first step organisations can take to help protect individuals and businesses from attacks that use impersonation, voice cloning and virtual kidnapping methodologies.

One very simple protection for individuals and businesses is to publish only lower-quality media online, or even to apply blurring to certain images or videos. The higher the quality of a video or image posted on a social media platform, the easier it is for cybercriminals to extract biometric data with which to produce a deep fake copy.
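
To make this concrete, the snippet below is a minimal, illustrative sketch of that kind of pre-publication step, using the open-source Pillow imaging library: it downscales an image and optionally blurs a region such as a face before it is posted. The file names and the face bounding box are hypothetical examples, not prescriptions.

```python
# Illustrative sketch only: reduce the biometric detail available in an
# image before publishing it. File names and the face box are made up.
from PIL import Image, ImageFilter

def degrade_for_publication(src_path: str, dst_path: str,
                            max_side: int = 640,
                            face_box: tuple | None = None) -> None:
    """Downscale an image and optionally blur a region (e.g., a face)
    to limit what a scraper can extract for deep fake training."""
    img = Image.open(src_path)

    # Downscale so the longest side is at most `max_side` pixels.
    img.thumbnail((max_side, max_side))

    # Optionally blur a sensitive region: crop, blur, paste back.
    if face_box is not None:
        region = img.crop(face_box).filter(ImageFilter.GaussianBlur(radius=8))
        img.paste(region, face_box[:2])

    # A lower JPEG quality setting further degrades fine detail.
    img.save(dst_path, quality=70)

# Hypothetical usage: blur the area around a face before uploading.
# degrade_for_publication("team_photo.jpg", "team_photo_web.jpg",
#                         face_box=(120, 40, 260, 200))
```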

Additionally, being mindful of the kind of personal data shared on social media platforms, whether in profile information or in posts and updates, can dramatically lower the chances of a cybercriminal collecting sensitive information. Such information may otherwise be used to craft a personalised message for an exploit such as a business email compromise campaign.

The technology tackling deep fake attacks

As deep fake attacks and virtual kidnapping scams become more achievable and widespread, more traditional cybercrime techniques, such as ransomware attacks, are likely to migrate towards harder-to-block communication paths such as voice and video. But that doesn't mean they can't be stopped.

A very effective and simple first step to protect business networks against potential exploits is multifactor authentication (MFA), which should be standard for any organisation handling sensitive data and information.
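
As a concrete illustration, the sketch below shows server-side verification of a time-based one-time password (TOTP), one common MFA building block, using the pyotp library. The enrolment flow and secret handling are simplified assumptions; production systems would store secrets in a hardened secrets manager.

```python
# Minimal TOTP sketch using pyotp; secret handling here is illustrative.
import pyotp

def enrol_user() -> str:
    """Generate a per-user TOTP secret, to be shared with the user's
    authenticator app (typically via a QR code) at enrolment time."""
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the six-digit code the user submitted. valid_window=1
    tolerates one 30-second step of clock drift either side."""
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)

# Example flow: enrol once, then check a code at login.
secret = enrol_user()
current_code = pyotp.TOTP(secret).now()
print("Code accepted:", verify_second_factor(secret, current_code))
```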

The high-context communication paths and environments arising from biometric-powered exploits involve a level of abstraction beyond what typical 'router-level' security solutions can handle. With this in mind, identity-aware anti-fraud techniques can be employed to identify and intercept such attacks before they reach their targets.

Moreover, every additional advanced attack generates more telemetry. This works in defenders' favour: the data can be used to make pertinent improvements in security analytics, which can then be executed in identity-aware security sensors.
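
As a loose illustration of that feedback loop (not any particular vendor's analytics), the sketch below keeps a simple per-identity baseline of one telemetry feature and flags events that deviate sharply from it. The feature and threshold are assumptions chosen purely for demonstration.

```python
# Toy per-identity anomaly flag over one telemetry feature; the feature
# (call duration in seconds) and the z-score threshold are assumptions.
from collections import defaultdict
from statistics import mean, pstdev

class IdentityBaseline:
    """Accumulates telemetry per identity and flags statistical outliers."""

    def __init__(self, min_samples: int = 5, z_threshold: float = 3.0):
        self.history = defaultdict(list)
        self.min_samples = min_samples
        self.z_threshold = z_threshold

    def observe(self, identity: str, value: float) -> bool:
        """Record one observation; return True if it looks anomalous
        against that identity's own history."""
        samples = self.history[identity]
        anomalous = False
        if len(samples) >= self.min_samples:
            mu, sigma = mean(samples), pstdev(samples)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.z_threshold
        samples.append(value)
        return anomalous

baseline = IdentityBaseline()
for duration in [60, 55, 62, 58, 61, 59]:          # normal call durations
    baseline.observe("ceo@example.com", duration)
print(baseline.observe("ceo@example.com", 600))     # unusually long -> True
```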

Future defences

Cybercriminals may have access to rapidly evolving technology with which to carry out their exploits, but so do the rest of us. Organisations have at their fingertips an array of technology, tools and best practices that are also evolving.

Virtual kidnapping scams rely on voice and video files to extort victims, and historically these have not been picked up by security software. But as data context-aware networks become more sophisticated (that is, networks in which decisions are based on relationships within the data rather than simple triggers on single values), security tools may soon be able to combine multiple telemetry types and apply detection signals to these high-context abuses.

For example, a multi-layered identity-aware system might use phone telemetry to determine whether the subject of a fraudulent voice-cloned message is where they claim to be, and whether they could in fact be making the call as claimed.
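
A hedged sketch of that location-consistency idea follows. It compares a caller's claimed location against the last position reported by their enrolled device; the telemetry source, coordinates and distance threshold are all hypothetical, and a real system would fuse many more signals.

```python
# Hypothetical consistency check: is the claimed location compatible
# with the enrolled device's last known fix? Threshold is an assumption.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def location_claim_plausible(claimed: tuple, device_fix: tuple,
                             max_km: float = 50.0) -> bool:
    """True if the claimed location sits within max_km of the device's
    last reported position; False suggests the call merits scrutiny."""
    return haversine_km(*claimed, *device_fix) <= max_km

# Example: a caller claims to be in Sydney, but the enrolled phone last
# reported a fix in Melbourne, roughly 700 km away.
sydney, melbourne = (-33.87, 151.21), (-37.81, 144.96)
print(location_claim_plausible(claimed=sydney, device_fix=melbourne))  # False
```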

By and large, the benefits that generative AI platforms offer us far outstrip any potential drawbacks, even as cybercriminals leverage them to make their attacks and exploits more sophisticated. With the right combination of practical precautions and security technology, businesses can protect themselves and their workers.
