Deepfakes, Synthetic Identities, and Agentic AI: The New Era of E-Commerce Fraud

By Markus Bergthaler | March 2026

AI is often celebrated as the solution to fraud prevention. But that's only half the story. Artificial Intelligence is a double-edged sword: it can catch fraudsters, but it also lets criminals commit fraud more intelligently, more quickly, and at greater scale than ever before. In my conversations with security experts across the German-speaking region, one thing is clear: deepfakes, synthetic identities, and agentic AI attacks are no longer science fiction. They're happening now, and they threaten your e-commerce business. This article shows you what you need to know and how to protect yourself.

The Numbers Don't Lie

The statistics are alarming. A recent study shows that 93% of companies encountered deepfakes or AI-generated fraud attempts in the past year. This is no longer a niche problem — it's mainstream fraud. In biometric authentication, deepfake usage increased by 58% last year. Synthetic identity fraud causes an estimated 20 to 40 billion euros in global losses per year. In Austria and the German-speaking region, we're seeing rapid adaptation: fraudsters are mobile, well-organized, and exploit local vulnerabilities.

Deepfake Fraud: The New Face of Biometric Takeover

Imagine this: a customer attempts to log in and is asked to submit a video selfie to verify identity. But that video is AI-generated — a deceptively realistic deepfake of the account's legitimate owner. Years ago, this required specialized knowledge and expensive software. Today? Tools like Synthesia, D-ID, and others are freely available. The effort is minimal, but the impact is enormous.

How deepfake fraud works:

• The fraudster collects images or videos of the victim (from social media, LinkedIn, public photos).

• They use AI tools to generate a deceptively realistic video selfie.

• They use that video to bypass KYC (Know Your Customer) checks or biometric authentication.

• They now control the account and can execute payments or steal data.

The alarming part: most biometric systems still don't reliably detect deepfakes. One test study found that even advanced systems missed 30–40% of deepfakes. Detection is improving (the best systems already reach 98% accuracy), but the gap between older and newer technologies is dangerous.

Synthetic Identity Fraud: A Person Who Never Existed

This is subtler and potentially more dangerous. A fraudster creates an entirely new identity — a made-up name, a fabricated (but consistent) history, a synthetic online presence. The goal: open accounts, apply for credit, or make massive purchases before the fraud is discovered.

The process:

• The fraudster combines real and fake data (for example, real credit card numbers with invented names and addresses).

• They build a consistent online presence: social media profiles, email addresses, transaction history.

• They pass Know Your Customer (KYC) checks using deepfakes or manipulated documents.

• They execute high-value transactions and disappear.

In the DACH market, this is a rapidly growing trend: synthetic identities are used to commit fraud in subscription services, high-ticket purchases, and even B2B transactions. Many shops don't notice the fraud until the chargeback arrives.

Agentic AI: Autonomous Multi-Step Attacks

This is the new frontier. Agentic AI refers to AI systems that can autonomously execute complex, multi-step tasks without human intervention between steps. In the context of fraud, that means:

Account-Creation Botnets: An AI agent automatically creates hundreds of accounts with synthetic identities, defeating CAPTCHAs and verification checks with AI capabilities of its own.

Reward Abuse: The agent automatically probes your loyalty and rewards programs for weaknesses, identifies exploitable patterns, and uses them to earn and resell the maximum number of points.

Return Fraud Automation: Automated order placement, receipt confirmation via deepfakes or bots, and returns shipped without the actual goods.

Payment Cycling: The agent conducts repeated small transactions to test your thresholds before executing a major fraud.

A real-world example: a major European retailer discovered that an agentic AI system had conducted small transactions over several weeks to map its fraud thresholds before finally executing a 50,000 EUR fraud. The system was so sophisticated that it automatically adapted its behavior to the security measures it detected.
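The threshold-probing pattern described above can be caught with a simple heuristic. The sketch below is illustrative only: the review threshold, field names, and cutoffs are all hypothetical assumptions, not values from any real system.

```python
from datetime import datetime, timedelta

# Hypothetical manual-review threshold an attacker might probe.
REVIEW_THRESHOLD = 500.0

def looks_like_threshold_probing(transactions, window_days=14, min_probes=5):
    """Flag an account that places many transactions just under a review
    threshold within a short window -- the 'payment cycling' pattern."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [t for t in transactions if t["time"] >= cutoff]
    probes = [t for t in recent
              if 0.5 * REVIEW_THRESHOLD <= t["amount"] < REVIEW_THRESHOLD]
    return len(probes) >= min_probes

# Six sub-threshold transactions on consecutive days: suspicious.
txns = [{"amount": 480.0, "time": datetime.now() - timedelta(days=i)}
        for i in range(6)]
print(looks_like_threshold_probing(txns))  # → True
```

In production this check would run per account and feed into an aggregate risk score rather than blocking on its own, to avoid punishing customers with legitimately regular spending.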

The DACH Scenario: Why Austria Is Particularly at Risk

German-speaking e-commerce is a preferred target for several reasons:

High Purchasing Power: The DACH region has high average order values, making fraud more lucrative.

Fragmented Security Landscape: Different shops use different (often outdated) fraud prevention systems. Fraudsters can exploit these differences.

Regulatory Complexity: PSD2 and other regulations create loopholes that fraudsters exploit (e.g., exempt transactions).

Document Fraud: Forged Austrian and German documents are relatively easy to produce and can be paired with real addresses that pass validation, perfect raw material for synthetic identity fraud.

How to Protect Yourself: A Multi-Layer Approach

1. Biometric Authentication – But Done Right

Not all biometric systems are created equal. You need anti-deepfake technologies:

Liveness Detection: Verify that the video selfie comes from a real, living person, not from a replayed video or a deepfake. Modern liveness checks rely on signals that are hard to forge, such as skin micro-texture, challenge-response movements, and remote pulse detection.

Document Verification with AI: Check documents (ID, driver's license) not just for authenticity, but also for consistency with other signals. An AI system can detect anomalies in document photos that humans miss.

Behavioral Biometrics: Use behavioral patterns — how the user types, navigates, scrolls. Deepfakes and synthetic identities don't have the same behavioral profile.
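Behavioral biometrics can be as simple as comparing typing cadence against a stored profile. The sketch below is a toy illustration under assumed data (millisecond inter-keystroke intervals); real systems model many more signals than this.

```python
from statistics import mean, stdev

def cadence_anomaly_score(profile_intervals, session_intervals):
    """Compare a session's inter-keystroke intervals (ms) against a user's
    stored profile. Returns how many profile standard deviations the session
    mean deviates. Scripted agents tend to show implausibly fast, uniform
    timing compared to the natural variance of a human typist."""
    mu = mean(profile_intervals)
    sigma = stdev(profile_intervals)
    return abs(mean(session_intervals) - mu) / sigma

human_profile = [180, 220, 160, 240, 200, 210, 170]  # natural human variance
bot_session = [30, 30, 31, 30, 30]                   # uniform, machine-fast
print(cadence_anomaly_score(human_profile, bot_session) > 3)  # → True
```

A score above a few standard deviations would raise the session's risk rather than block it outright, since typing speed also shifts with device and context.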

2. Synthetic Identity Detection

Consistency Checks: Validate that name, address, phone, email, and other identifying information are consistent. A name that doesn't exist in any public directory is suspicious.

Velocity & Pattern Analysis: A synthetic identity that opens too many accounts or places high-value orders in too short a time is a red flag.

Network Analysis: Look not just at the individual, but at their network. If ten "new" customers with different names all order from the same IP address, that's suspicious.
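The IP-clustering check above is straightforward to prototype. This is a minimal sketch with hypothetical field names and an assumed cutoff; a real system would cluster on many more linkage signals (device fingerprint, payment instrument, shipping address).

```python
from collections import defaultdict

def flag_shared_ips(accounts, max_accounts_per_ip=3):
    """Group new accounts by signup IP and flag IPs behind an implausible
    number of distinct identities -- a common synthetic-identity pattern."""
    names_by_ip = defaultdict(set)
    for acc in accounts:
        names_by_ip[acc["ip"]].add(acc["name"])
    return {ip: names for ip, names in names_by_ip.items()
            if len(names) > max_accounts_per_ip}

# Ten "different" customers behind one IP, plus one ordinary signup.
signups = [{"name": f"user{i}", "ip": "203.0.113.7"} for i in range(10)]
signups.append({"name": "alice", "ip": "198.51.100.2"})
print(list(flag_shared_ips(signups)))  # → ['203.0.113.7']
```

Shared IPs alone produce false positives (offices, carrier-grade NAT), so this signal works best combined with the velocity checks above.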

3. AI Against AI: Machine Learning for Fraud Detection

You can't manually fight agentic AI. You need your own ML systems that learn to recognize agentic AI behavior:

Anomaly Detection: Train ML models on normal transaction patterns. Agentic AI attacks have a characteristic behavior pattern (automated, fast, repetitive) that differs from normal customers.

Real-Time Scoring: Every transaction should be scored in real-time based on historical data and current patterns.

Adaptive Rules: Your fraud detection must adapt. If fraudsters use new techniques, your system should learn and update rules.
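As a minimal stand-in for the ML scoring described above, the sketch below scores a transaction by how far its amount deviates from the customer's history. This is a deliberately simple statistical toy, not a production fraud model; real systems use trained models over many features.

```python
from statistics import mean, stdev

def risk_score(history, amount):
    """Score a transaction against the customer's amount history: the
    absolute z-score of the amount, mapped into [0, 1]. Sparse histories
    default to medium risk because there is nothing to compare against."""
    if len(history) < 2:
        return 0.5
    mu = mean(history)
    sigma = stdev(history) or 1.0
    z = abs(amount - mu) / sigma
    return min(z / 6.0, 1.0)  # six or more sigmas maps to maximum risk

past = [42.0, 55.0, 38.0, 61.0, 47.0]
print(risk_score(past, 50.0) < 0.2)     # normal basket → low risk
print(risk_score(past, 4800.0) == 1.0)  # extreme outlier → maximum risk
```

In a real pipeline, this per-customer score would be one feature among many (device, network, velocity) feeding a model that is retrained as fraud patterns shift, which is what makes the rules adaptive.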

4. Document Verification – Beyond Surface Level

Optical Character Recognition (OCR) with AI: Modern OCR systems can not only recognize text, but also detect anomalous patterns in forged documents.

Biometric Matching: The face in the document should be matched with the live selfie. AI can detect inconsistencies here that manual review misses.

5. Multi-Factor Authentication (MFA) – Implemented Correctly

Out-of-Band Verification: A fraudster who has stolen a password still can't intercept one-time codes delivered to the legitimate user's registered device. Enforce MFA for sensitive operations.

Push Authentication: Send a push notification to the registered device: "Someone just signed in with your password. Confirm this action." A deepfake can't tap "Confirm" on the real owner's phone.

Case Study from Practice: An Austrian Retailer Protects Itself

An Austrian fashion retailer with 15 million EUR in annual revenue saw a sudden spike in KYC bypasses. Within two weeks, over 200 suspicious accounts were created with deepfake authentication. After implementing the following measures, the rate dropped by 95%:

• Switch to advanced liveness detection with anti-deepfake technology.

• Implementation of behavioral biometrics during the checkout process.

• Automatic network analysis to identify synthetic identities (multiple accounts with the same IP).

• Real-time machine learning scoring for all transactions.

The Future: Arm Yourself Now

Deepfakes, synthetic identities, and agentic AI aren't future problems — they're here now. While technology to detect them is improving, there's a critical window right now where many shops are vulnerable. If you haven't updated your systems, you'll be a target. Combine these AI fraud prevention mechanisms with the knowledge from our article on fraud prevention in e-commerce to build comprehensive protection. And remember: the best defense isn't catching fraudsters, it's being intelligent enough to prevent them from getting in the first place.

Are your fraud prevention systems not yet equipped for modern AI attacks? I'll help you build a robust, AI-resistant defense.

Schedule a Free Security Audit