Cyber attackers using sophisticated deepfake tools: report


By Dylan Bushell-Embling
Monday, 14 July, 2025


Trend Micro has published a new report detailing the scale and sophistication of deepfake use in cyber attacks. The report reveals that cybercriminals have begun leveraging deepfakes in a range of ways to attack enterprise targets.

One example is the fake employee scam, in which attackers create fake personas to apply for remote work positions, use deepfake video streaming tools and AI tools such as ChatGPT to ace interviews and answer technical questions correctly in real time, and then, once hired, use the company-provided laptop to steal internal data or deploy ransomware.

Meanwhile, deepfake-enabled Know Your Customer (KYC) bypass attacks target banking, financial services and insurance companies, using deepfake pictures and videos to defeat identity checks and open accounts under false identities. CEO fraud scams involve impersonating a high-ranking executive within a target company and attempting to convince lower-level employees to transfer funds or data to the attacker; deepfake technology can now produce real-time video of the executive to support such scams.

Attackers are also using deepfake technology to target consumers. Trend Micro said it has identified a rise in personalised attacks on individuals, such as virtual kidnapping scams that use faked audio clips of the supposedly captive family member, and romance scams that rely on fake audio and video material.

According to the report, there are now countless tools available for generating audio and video deepfakes, including some supporting real-time face swapping. More nefarious tools can generate nude images from photos of real or fabricated people, or produce convincing audio fakes using low-cost subscription services.

Trend Micro ANZ Field CISO Andrew Philp said the findings demonstrate how deepfakes have moved beyond hype into real-world exploitation.

“AI-generated media is not just a future risk, it’s a real business threat,” he said. “We’re seeing executives impersonated, hiring processes compromised, and financial safeguards bypassed with alarming ease.

“This research is a wake-up call: if businesses are not proactively preparing for the deepfake era, they’re already behind. In a world where seeing is no longer believing, digital trust must be rebuilt from the ground up.”

Trend Micro is urging businesses to take proactive steps to minimise their risk exposure to deepfake scams, including educating staff on social engineering risks, reviewing authentication workflows, and exploring detection solutions for synthetic media.

The full report, ‘Deepfake it ’til You Make It: A Comprehensive View of the New AI Criminal Toolset’, is available from Trend Micro.



