Google’s Mandiant: North Korean hackers are using AI deepfakes to target crypto firms and DeFi players

Google Cloud’s threat intelligence team Mandiant is warning that a North Korea–linked hacking group is now using AI-generated deepfake video inside fake video calls to social-engineer cryptocurrency professionals and steal funds.

What happened

- Mandiant says it investigated a recent breach at a fintech firm, attributed with high confidence to UNC1069 (aka “CryptoCore”), a DPRK-linked actor.
- The intrusion chain was highly social: the attackers used a compromised Telegram account to pose as a known industry contact, sent a Calendly link for a 30-minute meeting, and hosted a spoofed Zoom call on their own infrastructure.
- During the call, the victim saw what appeared to be a deepfake video of a well-known crypto CEO. The attackers then claimed audio issues and instructed the victim to run “troubleshooting” commands, a ClickFix technique that executed malicious code.
- Forensic analysis uncovered seven distinct malware families on the victim’s machine, apparently deployed to harvest credentials, browser data, and session tokens for financial theft and future impersonation.

Why this matters to crypto and DeFi

- Mandiant says UNC1069 is targeting both companies and individuals across the crypto ecosystem, including software teams, developers, venture firms, and executives.
- The campaign illustrates a broader shift: state-linked thieves are moving away from mass phishing toward fewer, highly tailored operations that exploit routine trust in calendar invites, messages, and video meetings. The result is bigger heists from more surgical attacks.
- The trend coincides with a jump in DPRK-linked crypto thefts: Chainalysis reported $2.02 billion stolen in 2025 (a 51% increase year-over-year), bringing total attributable thefts to about $6.75 billion.
Expert perspective

Fraser Edwards, CEO of decentralized identity firm cheqd, said these attacks succeed because everything appears normal: a familiar sender, a routine meeting, no suspicious attachments. Deepfakes are typically introduced at escalation points (live calls) to short-circuit doubts and push the victim to act. He also warned that AI is used beyond live calls to craft messages, mirror tones, and generally make impersonation harder to detect. As AI agents become part of everyday workflows, attackers could automate deepfake deployment, scaling these impersonation attacks.

What defenders should do

- Mandiant has published detailed TTPs and IOCs for detection and hunting; crypto firms should review them and harden controls.
- Practical steps include verifying meeting invites through secondary channels, refusing to run "troubleshooting" commands requested during calls, enforcing strong endpoint protections and MFA, and improving systems that signal authenticity rather than relying on user instinct.

Takeaway

This campaign marks a dangerous escalation: North Korean threat actors are combining AI-driven impersonation with traditional malware and social engineering to hit the crypto sector. Organizations that rely on remote coordination and rapid decision-making should assume these techniques are in use and prioritize verification, detection, and endpoint defenses. For full technical details and indicators of compromise, see Mandiant’s report and detection guidance.
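One defensive step above, checking that a meeting invite actually points at legitimate conferencing infrastructure rather than an attacker-hosted lookalike, can be sketched as a simple link check. This is a minimal illustration, not guidance from Mandiant's report: the allowlisted domains and the helper name are assumptions, and no automated check replaces verifying the invite with the sender through a second channel.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the meeting domains an organization actually uses.
TRUSTED_MEETING_HOSTS = {"zoom.us", "meet.google.com", "teams.microsoft.com"}

def is_trusted_invite(url: str) -> bool:
    """Return True only if the invite link's host is a trusted meeting
    domain or a subdomain of one (e.g. us02web.zoom.us)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_MEETING_HOSTS)

# A genuine Zoom subdomain passes the check:
print(is_trusted_invite("https://us02web.zoom.us/j/123456789"))       # True
# A lookalike host that merely embeds "zoom.us" in its name fails:
print(is_trusted_invite("https://zoom.us.meeting-join.example/j/123"))  # False
```

Matching on the registered domain (not on substrings of the URL) is the point of the design: spoofed calls like the one in this campaign are typically hosted on attacker-controlled domains crafted to look plausible at a glance.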