IT Brief US - Technology news for CIOs & IT decision-makers

AI deepfakes drive surge in cyberattacks, threatening trust online


A recent surge in sophisticated cyberattacks has underscored the growing threat posed by AI-driven deepfake technology. Security experts warn that such attacks, ranging from AI-generated advertisements to real-time deepfaked video calls, are targeting both consumers and businesses by mimicking familiar figures and trusted institutions.

In Canada, a wave of phishing incidents has been observed on Instagram, where scammers deploy AI-generated advertisements masquerading as leading financial institutions. These convincing ads, complete with official branding, draw users to malicious websites with the sole purpose of harvesting sensitive personal data.

Aditya Sood, Vice President of Security Engineering and AI Strategy at Aryaka, described this trend as a "significant and immediate threat to financial security and public trust." Sood highlighted that attackers exploiting synthetic media can "dupe users into divulging sensitive information or making unauthorized fund transfers, resulting in substantial financial losses." He warned that these scams are rapidly eroding confidence in digital communications, as their sophistication often allows them to sidestep existing platform moderation and regulatory oversight.

According to Sood, the rapid advancement of artificial intelligence has provided cybercriminal groups, such as Scattered Spider and Salt Typhoon, with tools to execute attacks that frequently outpace traditional defences. "The proliferation of artificial intelligence has been a double-edged sword," he said. "Cyberattacks have also become more advanced, taking advantage of security defenses that are unable to keep pace. This power imbalance has become increasingly evident over time."

Sood called for more robust preventative measures from digital platforms. He emphasised the need for AI-powered detection tools that can identify manipulated media and impersonation before fraudulent ads are published. Sood also advocated for verified branding measures including digital watermarks, official account verification, and provenance metadata to help users distinguish genuine content from forgeries.
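The provenance-metadata idea can be sketched in a few lines. The example below is purely illustrative, not any platform's actual scheme: a publisher attaches a keyed HMAC tag to an ad's content, and the platform verifies the tag before treating the ad as genuine. Real provenance standards such as C2PA use public-key signatures and richer metadata; the shared key and field names here are assumptions for the sketch.

```python
# Illustrative provenance check: a publisher signs ad content with a
# shared-secret HMAC; the platform verifies the tag before displaying a
# "verified brand" badge. Key and content are placeholders, not a real API.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret"  # illustrative only; real schemes use public keys

def sign(content: bytes) -> str:
    """Produce a provenance tag for the given content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Constant-time check that content matches its provenance tag."""
    return hmac.compare_digest(sign(content), tag)

ad = b"Official promotion from Example Bank"
tag = sign(ad)
print(verify(ad, tag))                   # genuine ad: True
print(verify(b"Tampered ad body", tag))  # forged ad: False
```

A forged or altered ad fails verification even if it copies the branding pixel-for-pixel, which is the property Sood's recommendation relies on.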

Collaboration across industries was another critical recommendation. "Close collaboration between social media platforms, financial institutions, and law enforcement is essential to ensure the rapid takedown of fraudulent ads and to limit their impact," Sood advised. He further recommended technical controls such as network segmentation, virtual LAN quarantining, and zero-trust network access to restrict the movement of attackers in the event of a breach.
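The zero-trust control Sood describes can be illustrated with a minimal deny-by-default policy check. This is a hypothetical sketch, not a real product's API: every request is evaluated against an explicit allow-list of (user, segment, resource) tuples, so a compromised account on the wrong network segment cannot reach a sensitive resource.

```python
# Hypothetical zero-trust access decision: deny by default, allow only
# requests from trusted devices that match an explicit rule. Users,
# segments, and resources below are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    device_trusted: bool  # e.g. passed endpoint posture checks (assumed)
    segment: str          # VLAN/segment the request originates from
    resource: str

# Explicit allow rules; anything unmatched is denied.
ALLOW_RULES = {
    ("alice", "finance-vlan", "payments-api"),
    ("bob", "eng-vlan", "build-server"),
}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; allow only trusted devices matching a rule."""
    if not req.device_trusted:
        return False
    return (req.user, req.segment, req.resource) in ALLOW_RULES

print(authorize(AccessRequest("alice", True, "finance-vlan", "payments-api")))  # True
print(authorize(AccessRequest("alice", True, "eng-vlan", "payments-api")))      # False
```

The second call shows the lateral-movement restriction: valid credentials alone are not enough once the request originates from an unexpected segment.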

In a parallel development, new campaigns linked to the North Korean BlueNoroff hacking group—also known as Sapphire Sleet or TA444—have reportedly leveraged deepfake technology during video calls. Attackers digitally impersonate company executives on platforms such as Zoom to trick staff into downloading malware onto macOS devices. The primary motive behind these efforts is suspected to be cryptocurrency theft, echoing tactics previously associated with the group.

Randolph Barr, Chief Information Security Officer at Cequence, noted that this marks a major escalation in attack sophistication. "The use of AI-generated deepfakes in real-time video calls, combined with personalized social engineering, represents a major shift," Barr said. He pointed out that even the most security-conscious professionals might be deceived when interacting with someone who convincingly appears to be a high-ranking colleague.

Barr challenged the often-cited notion that users are always the weakest link, describing it as "outdated and unfair" in the face of such evolved threats. "This is no longer about user error—it's about control failure," he stated. He underscored the need for robust technical controls including mobile device management that restricts privilege levels, endpoint detection and response solutions with real-time monitoring, application whitelisting, and behavioural analytics.

"Behavioural monitoring allows us to focus not just on what an application is, but what it does. It enables detection of anomalies such as a user installing a new app, initiating unexpected network connections, or creating persistent agents—all signs that something's gone wrong," Barr explained. He urged companies to adopt layered defences integrating user education with strong endpoint controls and real-time behavioural monitoring.
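Barr's three example signals can be sketched as a simple baseline comparison. This is an assumed, minimal model of behavioural monitoring, not any vendor's implementation: observed endpoint events are checked against a per-user baseline, and anything outside it is flagged.

```python
# Hypothetical behavioural-monitoring sketch: flag events outside a
# user's baseline. Event types and baseline contents are illustrative.
BASELINE = {
    "apps": {"zoom", "slack", "chrome"},
    "destinations": {"intranet.example.com", "mail.example.com"},
}

def flag_anomalies(events):
    """Return alerts for the three anomaly classes Barr describes."""
    alerts = []
    for event in events:
        kind, value = event["type"], event["value"]
        if kind == "app_install" and value not in BASELINE["apps"]:
            alerts.append(f"new application installed: {value}")
        elif kind == "net_connect" and value not in BASELINE["destinations"]:
            alerts.append(f"unexpected connection to {value}")
        elif kind == "persistence":
            alerts.append(f"persistent agent created: {value}")
    return alerts

events = [
    {"type": "app_install", "value": "zoom"},                 # in baseline, ignored
    {"type": "net_connect", "value": "203.0.113.9"},          # flagged
    {"type": "persistence", "value": "LaunchAgent com.x.y"},  # always flagged
]
print(flag_anomalies(events))
```

Note the focus on what the activity *does* rather than what the application *is*: a known app installing a persistence mechanism is still flagged.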

Both Sood and Barr were united in their assessment that the evolving threat landscape demands not only agile technical solutions but also a rethink of collaboration and policy. With deepfake technology continuing to advance, the challenge will be to ensure organisations and individuals remain a step ahead of cybercriminals who are increasingly adept at exploiting the trust placed in familiar digital interfaces.
