Generative AI and cybersecurity

By Max Heinemeyer, Chief Product Officer, Darktrace
Wednesday, 15 March, 2023


Imagine this scenario: you receive an email from your CEO asking you to send some information over. It’s written in her exact tone of voice, using the exact language she typically uses, and it even references her dog in a joke. It’s precise, accurate and utterly convincing. But it wasn’t written by a human at all. It was crafted by generative AI, using nothing more than basic information a cybercriminal fed to it from her social media profiles.

The emergence of ChatGPT has catapulted AI into the mainstream consciousness, and with it come real concerns about implications for cyber defence. Within weeks of its launch, researchers were able to demonstrate ChatGPT’s ability to write phishing emails, craft malware code and explain how to embed malware into documents.

Adding further fuel to the fire, ChatGPT is no longer the only generative AI chatbot on the market.

Last month, we saw Google and Baidu throw their hats into the ring. So, as the tech giants clamour to create the best generative AI, what will it mean for the future of cyber defence?

Australian businesses face a 30% chance of falling victim to a cyber attack, and the average data breach costs $3.35 million.

The barrier to entry likely hasn’t been significantly lowered yet

One of the first questions ChatGPT raised was that of cybersecurity: could cybercriminals use ChatGPT or other generative AI to make their attack campaigns more effective? Could it lower the barrier to entry for would-be threat actors?

ChatGPT is a powerful tool, and its broad-ranging potential use cases can help existing users become more efficient, aggregate knowledge and automate lower-level tasks in a world marked by rapid digital transformation.

However, generative AI isn’t yet a silver bullet that solves everything; it has its limitations. Most obviously, it only knows what it has been trained on, and it requires ongoing retraining. This also means that when its training data is wrong, so are its answers: universities and news outlets are already reporting concerns about AI-assisted plagiarism and the spread of misinformation. Humans are therefore still needed to verify its output, as it can be hard to tell whether ChatGPT has simply made content up or based it on reliable information.

The same applies to any application of generative AI to cyberthreats. If criminals wanted to write malware, they would still need to guide ChatGPT through its creation and then verify that the resulting malware actually works. A would-be threat actor still needs substantial pre-existing knowledge of attack campaigns to use it effectively, meaning the barrier to entry hasn’t been significantly lowered just yet. When it comes to the technical crafting of attacks, however, some nuances do exist: in creating credible phishing emails, for example.

Generative AI-powered attacks mean quality over quantity

At Darktrace, we wondered if there was merit to concerns that ChatGPT might cause an increase in the number of cyber attacks targeting businesses — so we did our own research across our customer base. What we found tells a slightly different story.

While the number of email-based attacks has remained largely the same since ChatGPT’s release, the proportion of phishing emails that rely on tricking the victim into clicking a malicious link has actually declined from 22% to 14%. However, the average linguistic complexity of phishing emails has jumped by 17%.

More than 300,000 cybercrimes are committed in Australia each year, with a 13% increase in 2022 over the previous year.
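
Shifts of the kind behind that 17% figure can be approximated with standard readability scores. The minimal sketch below uses the Flesch-Kincaid grade level purely as an illustration; the actual metric behind the figure is not disclosed, and the two sample emails are invented for demonstration.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level for a block of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

def count_syllables(word: str) -> int:
    """Rough syllable count via vowel groups; adequate for trend analysis."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

# Comparing average scores across corpora of emails would surface the kind
# of complexity shift described above. These sample emails are invented.
crude = "Click here to claim your prize now!!!"
polished = ("Following our conversation on Tuesday, could you review the "
            "attached revision of the vendor agreement before Friday's call?")
for label, text in [("crude", crude), ("polished", polished)]:
    print(f"{label}: grade level {flesch_kincaid_grade(text):.1f}")
```

Tracking the average of such a score across incoming mail over time would reveal a drift towards more sophisticated phishing, even before any individual email is flagged.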

While of course correlation doesn’t mean causation, one theory of ours is that ChatGPT is allowing cybercriminals to redirect their focus. Instead of email attacks carrying malicious links or malware, criminals see a higher return on investment in crafting sophisticated social engineering scams that exploit trust and solicit the user to take direct action. For example, generative AI could be used to persuade HR to change the CEO’s salary payment details to a bank account controlled by an attacker’s money mule.

Imagine again the hypothetical we posited at the start: it would take a criminal mere minutes to scrape some information on a potential victim from their social media accounts and ask ChatGPT to create an email based on it. Within seconds, that criminal would be armed with a credible, well-written and contextualised spear-phishing email, ready to send.

A future of machines fighting machines

For nearly 10 years now, we’ve been predicting a future of AI-augmented attacks, and it seems we may now be on the cusp of that future. The generative AI arms race will push tech giants to release the most accurate, fast and credible AI on the market, and it is inevitable that cybercriminals will exploit this innovation for their own gain. The introduction of generative AI, which can also produce deepfake audio and video, into the threat landscape will make it easier for criminals to launch personalised attacks that scale faster and work better.

For defenders charged with protecting their employees, infrastructure and intellectual property, the answer will be to turn to AI-powered cyber defence. Self-learning AI on the market today identifies and contains subtle attacks through a deep knowledge of the users and devices within the organisations it protects. By learning these patterns of life, it develops a comprehensive understanding of what’s normal for users within the real-world context of everyday business data. The best way to stop hyper-personalised, AI-powered attacks is to have an AI that knows more about your business than external, generative AI ever could.
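
To make the “pattern of life” idea concrete, here is a minimal sketch of behavioural anomaly detection using scikit-learn’s IsolationForest. The features (send hour, recipient count, link count, body length) and all of the data are hypothetical illustrations, not Darktrace’s actual, undisclosed modelling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-email features for one user: hour sent, recipient count,
# link count and body length. A real system would use far richer signals.
rng = np.random.default_rng(0)
history = np.column_stack([
    rng.normal(10, 1.5, 500),   # this user usually sends mid-morning
    rng.poisson(2, 500),        # to a couple of recipients
    rng.poisson(1, 500),        # with rarely more than one link
    rng.normal(600, 150, 500),  # in bodies of roughly 600 characters
])

# Learn this user's "pattern of life" from their own history only.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history)

# A 3 am email with six links and an unusually long body sits far outside
# the learned pattern, however fluent its prose may be.
suspicious = np.array([[3, 1, 6, 2200]])
print(model.predict(suspicious))  # [-1] means anomalous for this user
```

The design point is that the model is trained only on one organisation’s own history, so even a perfectly fluent, AI-written email stands out when its behavioural metadata deviates from what is normal for that user.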

It’s clear that the introduction of generative AI to the mainstream is tipping the scales towards a war of algorithms against algorithms, machines fighting machines. For cybersecurity, the time to introduce AI into the toolkits of defenders is now.

