Increased use of AI-driven tools to create deepfake content has raised new concerns about public safety.

As the technology becomes more advanced and widely available, questions are also being raised about the reliability of visual identity verification systems used by centralized exchanges.

Governments are taking action against deepfakes

Misleading videos spread rapidly on social media platforms, amplifying concerns about a new wave of disinformation and fabricated content. The growing misuse of this technology increasingly undermines public safety and personal privacy.

The problem has reached new heights, and authorities around the world have begun passing laws that restrict the creation and distribution of deepfakes.

This week, Malaysia and Indonesia became the first countries to impose restrictions on Grok, the artificial intelligence chatbot developed by Elon Musk's xAI. Authorities said the decision followed concerns that the tool was being misused to generate sexually explicit, non-consensual images.

California Attorney General Rob Bonta announced similar action. On Wednesday, he confirmed that his office is investigating multiple reports of non-consensual, sexually explicit images of real individuals.

"This material, showing women and children in naked and sexually explicit situations, has been used to harass people online. I urge xAI to implement immediate measures to ensure this does not spread further," Bonta stated in a press release.

Unlike earlier deepfakes, newer tools can respond dynamically to queries, convincingly imitating natural facial movements and synchronized speech.

As a result, simple checks such as asking a user to blink, smile, or move their head can no longer reliably verify their identity.
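
To make the point concrete, here is a minimal, illustrative sketch of the kind of blink-based liveness check the article is describing. The eye aspect ratio (EAR) formula is a standard way to detect eye closure; where the landmark coordinates come from (a face-landmark model) and the threshold values are assumptions for illustration, not a description of any exchange's actual system.

```python
from math import dist

# Eye aspect ratio (EAR) from six eye landmarks, ordered p1..p6 around the eye.
# EAR drops sharply when the eye closes, which is what naive blink checks rely on.
def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = 2.0 * dist(p1, p4)
    return vertical / horizontal

# A naive challenge: ask the user to blink, then check whether EAR dipped
# below a closed-eye threshold in any captured frame. Threshold is illustrative.
def passes_blink_challenge(ear_per_frame, closed_threshold=0.2):
    return any(ear < closed_threshold for ear in ear_per_frame)

# Synthetic EAR values: eyes open around 0.3, with a brief dip for the blink.
ear_series = [0.31, 0.30, 0.29, 0.12, 0.11, 0.28, 0.30]
print(passes_blink_challenge(ear_series))  # True
```

Because a modern deepfake pipeline can render a blink or head turn on demand, a check like this offers little assurance on its own.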

These advancements have direct consequences for centralized exchanges that rely on visual verification in the onboarding process.

Centralized exchanges under pressure

The economic impact of deepfake-based fraud is no longer theoretical.

Industry observers and technology researchers have warned that AI-generated images and videos are increasingly appearing in contexts such as insurance claims and legal disputes.

Cryptocurrency platforms, which operate globally and are often dependent on automated onboarding, could become an attractive target for such activity if protective measures do not keep pace with the technology.

As AI-generated content becomes more accessible, trust based solely on visual verification may no longer be sufficient.
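
One way to picture what "not relying solely on visual verification" can mean in practice is to combine several independent signals before approving an account. The sketch below is purely illustrative; the signal names, weights, and thresholds are assumptions, not a description of how any particular exchange works.

```python
from dataclasses import dataclass

# Illustrative only: these signals and thresholds are assumptions for the sketch.
@dataclass
class VerificationSignals:
    face_match_score: float    # selfie vs. ID-document photo, 0..1
    liveness_score: float      # active/passive liveness result, 0..1
    document_authentic: bool   # security-feature checks on the ID document
    device_risk: float         # device/IP reputation, 0 (clean) .. 1 (risky)

def onboarding_decision(s: VerificationSignals) -> str:
    # No single visual score is decisive; several independent checks must agree.
    if not s.document_authentic or s.device_risk > 0.8:
        return "reject"
    if s.face_match_score > 0.9 and s.liveness_score > 0.9 and s.device_risk < 0.3:
        return "approve"
    return "manual_review"

print(onboarding_decision(VerificationSignals(0.95, 0.97, True, 0.1)))  # approve
```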

The challenge for cryptocurrency platforms is to adapt quickly, before the technology outpaces the protective measures designed to keep users and systems secure.