Google’s threat intelligence team is warning the crypto and security worlds: state-backed hackers are increasingly weaponizing popular AI tools — including Google’s own Gemini — to speed up reconnaissance, craft hyper-personalized phishing lures, and even help build malicious code.

What Google found

- The Google Threat Intelligence Group (GTIG) says model-extraction attempts are on the rise. In model extraction, an attacker repeatedly queries an AI service to infer its internal logic and recreate the model — essentially stealing intellectual property and potentially producing a copy that runs without safety controls.
- More worrying to GTIG than theft is how government-backed actors are using large language models (LLMs) for technical research, target profiling, and automated generation of sophisticated phishing content. The report flags activity linked to the DPRK, Iran, China, and Russia.
- LLMs let attackers scale open-source intelligence (OSINT) collection and produce hyper-personalized lures that replicate the tone and cultural nuance of legitimate organizations. “This activity underscores a shift toward AI-augmented phishing enablement,” Google writes. “Targets have long relied on indicators such as poor grammar, awkward syntax, or lack of cultural context to help identify phishing attempts. Increasingly, threat actors now leverage LLMs to generate hyper-personalized lures that can mirror the professional tone of a target organization.”
- Given a target’s biography or online footprint, models like Gemini can produce realistic personas and scenarios tailored to extract trust or credentials. AI also improves translation and localization, removing language barriers that once limited phishing reach.
- Growing code-generation capabilities make AI useful for troubleshooting and prototyping malicious tooling, and GTIG sees increasing experimentation with agentic AI — systems that can perform tasks autonomously — which could accelerate malware development and automation.

Why crypto firms and users should care

- The same AI-powered profiling and personalization techniques that trick corporate employees are highly effective against crypto users: wallet-targeting phishing, SIM-swap social engineering, spear-phishing of exchange staff, rug-pull impersonations, and more.
- Model extraction and AI-enabled toolchains could also be used to analyze smart contracts at scale, attempt automated exploit generation, or craft tailored scams against high-value crypto targets.

How Google is responding

- Google says it is working on multiple fronts: publishing GTIG reports, running dedicated threat-hunting teams, and hardening Gemini and other models against misuse. Through DeepMind and other internal efforts, Google aims to identify and disable malicious capabilities before they can be weaponized.
- Importantly, GTIG notes that while the use of AI across the threat landscape has increased, there are as yet no “breakthrough” offensive capabilities — the immediate risk is scale and refinement, not a new class of unstoppable tools.

Bottom line

AI is lowering the technical and financial barriers to sophisticated social engineering and malware development. For the crypto ecosystem — which remains a prime target for credential- and fund-centric attacks — the report is a reminder to double down on phishing-resistant controls, multi-factor authentication, staff training, and threat monitoring while vendors and defenders work to harden AI platforms.