The increasing use of AI-driven tools to create deepfake content is once again raising concerns about public safety.

As the technology grows more advanced and more accessible, questions are also arising about the reliability of visual identity verification at centralized exchanges.

Governments are taking action against deepfakes

Misleading videos spread rapidly on social media platforms, fueling concerns about a new wave of disinformation and fake content. Misuse of the technology increasingly undermines public safety and personal dignity.

The problem is taking increasingly serious forms, and governments worldwide are introducing laws that make malicious use of deepfakes illegal.

This week, Malaysia and Indonesia became the first countries to restrict access to Grok, the AI chatbot from Elon Musk's xAI. According to authorities, the move responds to concerns that Grok is being misused to create sexually explicit, non-consensual images.

California's Attorney General, Rob Bonta, announced a similar measure. On Wednesday, he confirmed that his office is investigating multiple reports of non-consensual, sexually suggestive images of real individuals.

"This material, which depicts women and children naked and in sexually explicit situations, is being used to intimidate people online. I urge xAI to take immediate action to prevent further dissemination," said Bonta in a statement.

Unlike older deepfake videos, newer AI tools can respond to cues in real time, convincingly mimicking natural facial movements and synchronized speech.

As a result, simple liveness checks such as blinking, smiling, or head movements may no longer be enough to reliably verify someone's identity.
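
For illustration, here is a minimal sketch of the kind of blink check this refers to, based on the widely used eye aspect ratio (EAR) over facial landmarks. The threshold values and the assumption that landmark coordinates are already available are simplifications for the example, not a description of any exchange's actual system; the point is that a deepfake capable of synthesizing a realistic blink passes this check by construction.

```python
# Naive blink-based liveness check: the eye aspect ratio (EAR) drops sharply
# when the eye closes, so counting dips below a threshold counts blinks.
# Threshold and minimum blink count are illustrative assumptions.
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points around one eye, in the usual
    # left-corner / upper-lid / lower-lid / right-corner ordering
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def is_live(ear_series, blink_threshold=0.21, min_blinks=1):
    # Count distinct dips below the threshold across the frame sequence.
    # A modern real-time deepfake can synthesize exactly this dip, which is
    # why a check like this is no longer sufficient on its own.
    blinks, below = 0, False
    for ear in ear_series:
        if ear < blink_threshold and not below:
            blinks, below = blinks + 1, True
        elif ear >= blink_threshold:
            below = False
    return blinks >= min_blinks
```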

These developments have direct implications for centralized exchanges that use visual verification during the registration process.

Centralized exchanges under pressure

The financial consequences of deepfake fraud are no longer merely theoretical.

Industry observers and technology researchers warn that AI-generated images and videos are increasingly appearing in situations such as insurance claims and legal cases.

Crypto platforms, which operate globally and often rely on automated registration procedures, could become an attractive target for such practices if security measures do not keep pace with the technology.

As AI content becomes easier to create, relying solely on visual verification may no longer be enough.
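
As a rough illustration of what moving beyond visual-only verification could look like, the sketch below combines a liveness score with other signup signals into a single risk score that decides whether a registration is auto-approved or escalated. All field names, weights, and thresholds are hypothetical; real systems would tune these against observed fraud.

```python
# Hypothetical layered-verification sketch: the visual liveness score is one
# input among several, not the sole gate for account registration.
from dataclasses import dataclass

@dataclass
class SignupSignals:
    liveness_score: float      # 0.0-1.0 from the visual liveness check
    document_match: float      # 0.0-1.0 face-to-ID-document similarity
    device_reuse: bool         # device already tied to another account?
    capture_metadata_ok: bool  # stream metadata consistent with a live camera?

def registration_risk(s: SignupSignals) -> float:
    # Weighted sum of weaknesses in each signal; weights are illustrative.
    risk = (1.0 - s.liveness_score) * 0.4 + (1.0 - s.document_match) * 0.3
    if s.device_reuse:
        risk += 0.2
    if not s.capture_metadata_ok:
        risk += 0.3
    return min(risk, 1.0)

# Registrations scoring above the threshold go to manual review rather than
# being approved automatically.
REVIEW_THRESHOLD = 0.5
```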

The challenge for crypto platforms is to adapt quickly enough that the technology does not outpace the security measures needed to protect users and systems.