- Deepfake selfies can now bypass traditional verification systems
- Scammers are exploiting AI to create synthetic identities
- Organizations should adopt advanced behavior-based detection methods
The latest AU10TIX Global Identity Fraud Report reveals a new wave of identity fraud, largely driven by the industrialization of AI-based attacks.
Based on an analysis of millions of transactions from July to September 2024, the report shows how digital platforms across sectors, particularly social media, payments and cryptocurrency, are facing unprecedented challenges.
Fraud tactics have evolved from simple document forgeries to sophisticated synthetic identities, deepfake images, and automated bots that can bypass conventional verification systems.
Social media platforms experienced a dramatic escalation in automated bot attacks in the run-up to the 2024 US presidential election. According to the report, social media attacks accounted for 28% of all fraud attempts in the third quarter of 2024, a notable jump from just 3% in the first quarter.
These attacks focus on disinformation and large-scale manipulation of public opinion. AU10TIX says these bot-driven disinformation campaigns use advanced generative AI (GenAI) to evade detection, an innovation that has allowed attackers to scale their operations while slipping past traditional verification systems.
GenAI-powered attacks began escalating in March 2024 and peaked in September; they are believed to be aimed at influencing public perception by spreading false narratives and inflammatory content.
One of the report's most surprising discoveries involves the emergence of 100% deepfake synthetic selfies: hyper-realistic images created to imitate authentic facial features with the intention of bypassing verification systems.
Traditionally, selfies were considered a reliable method of biometric authentication, as the technology needed to convincingly spoof a facial image was out of reach for most fraudsters.
AU10TIX highlights that these synthetic selfies pose a unique challenge to traditional KYC (Know Your Customer) procedures. The change suggests that, in the future, organizations that rely solely on facial comparison technology may need to reevaluate and strengthen their detection methods.
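For context, the check below is a minimal sketch of the kind of facial-comparison step these synthetic selfies are built to defeat. It assumes the open-source face_recognition Python library and hypothetical file names; the 0.6 distance tolerance is that library's default, not a figure from the report.

```python
# Minimal sketch of a facial-comparison KYC check (hypothetical file names).
# A hyper-realistic synthetic selfie can satisfy this check, which is why
# the report recommends layering additional signals on top of it.
import face_recognition

id_photo = face_recognition.load_image_file("id_document_photo.jpg")
selfie = face_recognition.load_image_file("submitted_selfie.jpg")

# Extract one face embedding per image (assumes exactly one face is present).
id_encoding = face_recognition.face_encodings(id_photo)[0]
selfie_encoding = face_recognition.face_encodings(selfie)[0]

# True if the embeddings fall within the library's default distance tolerance (0.6).
match = face_recognition.compare_faces([id_encoding], selfie_encoding)[0]
print("Selfie matches ID photo:", match)
```

Because a convincing deepfake can produce an embedding within tolerance of the ID photo, passing this comparison alone proves little about who is actually behind the camera.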
Additionally, fraudsters are increasingly using AI to generate variations of synthetic identities through “image template” attacks: manipulating a single ID template to produce many unique identities, each with randomized photo elements, document numbers and other personal identifiers. This lets attackers spin up fraudulent accounts across platforms at scale.
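One weakness of template attacks is that every identity minted from the same template shares most of its visual layout, so near-duplicate detection can surface them. The following is a minimal sketch, assuming the open-source imagehash and Pillow packages; the Hamming-distance threshold and cluster-size cutoff are illustrative values, not ones from the report.

```python
# Minimal sketch: flagging possible "image template" attacks by clustering
# near-duplicate ID document images with a perceptual hash.
# Assumes the third-party `imagehash` and `Pillow` packages; threshold and
# cluster-size cutoff below are illustrative assumptions.
from collections import defaultdict

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 6   # max bit difference to treat two documents as "same template"
CLUSTER_ALERT_SIZE = 5  # how many near-identical documents trigger an alert

def detect_template_reuse(image_paths):
    """Group submitted ID images by perceptual hash and flag large clusters.

    Documents generated from one manipulated template share most of their
    layout, so their hashes land within a small Hamming distance of each
    other even when photos and field values differ.
    """
    clusters = defaultdict(list)  # representative hash -> list of image paths
    for path in image_paths:
        h = imagehash.phash(Image.open(path))
        # Attach to an existing cluster if a representative hash is close enough.
        for rep in clusters:
            if h - rep <= HAMMING_THRESHOLD:  # ImageHash subtraction = Hamming distance
                clusters[rep].append(path)
                break
        else:
            clusters[h].append(path)
    # Many distinct "identities" sharing one template is a fraud signal.
    return [paths for paths in clusters.values() if len(paths) >= CLUSTER_ALERT_SIZE]
```

Perceptual hashes change little when only small regions of an image (a photo, a document number) are swapped, so a burst of near-identical hashes across supposedly distinct applicants is a strong template-reuse signal.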
In the payments sector, the fraud rate fell in the third quarter, from 52% in the second quarter to 39%. AU10TIX attributes this progress to increased regulatory oversight and law enforcement interventions. However, despite the reduction in direct attacks, the payments industry remains the most frequently targeted sector, and many fraudsters, deterred by tighter security, have redirected their efforts toward the cryptocurrency market, which accounted for 31% of all attacks in the third quarter.
AU10TIX recommends that organizations go beyond traditional document-based verification and adopt behavior-based detection systems. By analyzing patterns in user behavior, such as login routines, traffic sources, and other unique behavioral signals, companies can identify anomalies that indicate potentially fraudulent activity, as the sketch below illustrates.
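As a concrete illustration of the idea, here is a minimal sketch of one common approach: an unsupervised outlier model over per-session features. The feature set and contamination rate are assumptions for illustration, not details from the report; it uses scikit-learn's IsolationForest.

```python
# Minimal sketch of behavior-based anomaly detection on login telemetry.
# Feature names and the contamination rate are illustrative assumptions,
# not details from the AU10TIX report.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one session: [login hour (0-23), seconds between actions,
# distinct IPs in the last 24h, failed attempts before success].
historical_sessions = np.array([
    [9, 40.0, 1, 0],
    [10, 35.5, 1, 1],
    [14, 52.0, 2, 0],
    # ... many more sessions from normal user activity
])

# Train an unsupervised outlier detector on known-normal behavior.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical_sessions)

# A bot-like session: odd hour, machine-speed actions, many IPs, many failures.
new_session = np.array([[3, 0.2, 12, 8]])
if model.predict(new_session)[0] == -1:  # -1 flags an outlier
    print("Anomalous behavior: route this session to step-up verification")
```

In practice such a score would be one signal among several, used to route suspicious sessions to step-up verification rather than to block them outright.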
“Fraudsters are evolving faster than ever, leveraging AI to scale and execute their attacks, especially in the social media and payments sectors,” said Dan Yerushalmi, CEO of AU10TIX.
“While companies use AI to strengthen security, criminals are weaponizing the same technology to create synthetic selfies and fake documents, making detection nearly impossible.”