AI Deepfakes: A New Identity Threat for Blue-Collar Industries

AI deepfakes are exploding across farming, construction, and manufacturing. Shae McBride breaks down what this new wave of identity fraud means for blue-collar businesses—and how to stay protected.

November 2025
[Image: stop sign reading "deepfakes"]

Introduction

Fraudsters have found a new way to trick businesses: deepfakes, or fake photos, videos, and voices generated by AI, now drive up to 1 in 5 biometric identity fraud attempts in banking, construction, and farming[1][2]. In the past year alone, there's been a 35% jump in digital document forgeries and a 40% increase in so-called "injection attacks"—where fake identities are fed directly into software to pass security checks without ever using a camera[3][4].

What Does This Mean for Us?

  1. Blue-collar industries are not immune. Farmers, contractors, and manufacturers are targeted because remote transactions and equipment sales are growing faster than ever, making it easier for scammers to hide behind fake profiles[5][6].
  2. Deepfakes are smart. Criminals use AI to make identities that don't exist, swap faces in videos, and even create fake moving selfies to bypass security[1][3].
  3. Real harm is happening. The cryptocurrency sector faces deepfake fraud in 60% of biometric attempts, while digital banks see rates of 22%[1].
  4. Most professionals feel unprepared. Fewer than 1 in 10 anti-fraud professionals feel adequately prepared to combat these threats[7].

The Scale of the Problem

Recent data reveals the dramatic acceleration of this threat[1][3]:

  • Deepfaked selfies increased 58% in 2025
  • Total deepfake files surged from 500,000 in 2023 to 8 million in 2025
  • AI-driven crypto scams alone surged 456% between May 2024 and April 2025
  • 77% of anti-fraud professionals have witnessed clear acceleration in deepfake social engineering over the past two years

How Can You Protect Your Business?

1. Stay Informed

Train your team to spot unusual online behavior, suspicious emails, and social media messages. If a deal looks too good to be true, it might be a scam[3][4].

2. Double-Check Identities

Call new vendors or buyers using numbers found on official company websites, not the contact details provided in emails. Verify identity before sharing sensitive information or sending payments[4].

3. Update Your Verification Tools

Use multi-factor authentication and biometric ID checks that update regularly to catch deepfakes. If your system is old, ask your tech provider about newer anti-fraud features that can detect fake images and videos[1][3][4].

4. Keep Personal Data Safe

Protect passwords, business documents, and employee information. Never share login details or financial info in unsecured emails or texts[4].

5. Report Suspicious Activity

If you notice anything strange—like fake profiles, odd payment requests, or photos that don't match—report it to your IT manager and financial partner right away[1][4].

Understanding the Attack Methods

Fraudsters employ three primary deepfake techniques[1]:

  • Synthetic identities: AI-generated faces that don't correspond to real people
  • Face swaps: Replacing one person's face with another in video
  • Animated selfies: Using AI to add movement to static photos

The Veriff report warns that virtual camera injections represent the most common attack method, often paired with device emulation techniques to trick verification software into believing fraudulent attempts are legitimate users[3].

Industry Impact

While physical counterfeits still account for 47% of document fraud attempts, the accessibility of generative AI and modern editing tools has driven rapid growth in digital forgeries[3]. Human detection capabilities lag dangerously behind the technology, with studies indicating people correctly identify high-quality deepfakes only around one in four times[3].

"As detection improves, fraud rings evolve, becoming faster, more organized and commercially driven," said Simon Horswell, senior fraud specialist manager at Entrust. "Identity is now the front line, and protecting it with trusted, verified identity across the customer lifecycle is essential to staying ahead of adaptive threats"[1].

Bottom Line

AI deepfakes can fool even experienced professionals. Make security a daily habit—from the farm to the jobsite to the factory floor. At Catalyst Communications Network, we want to help you stay ahead by sharing the latest data and practical tips. Together, we can protect our people, our businesses, and our communities.

References

[1] Entrust. (2025, November 18). 2026 Identity Fraud Report: The Evolution of AI-Driven Fraud. Analysis of over one billion identity verifications across 195 countries, September 2024-September 2025.

[2] TRM Labs. (2025). Cryptocurrency Fraud Report. AI-driven crypto scams analysis, May 2024-April 2025.

[3] Veriff. (2025). Identity Fraud Report 2025: Digital Forgeries and Injection Attacks. Annual fraud trends analysis.

[4] Feedzai. (2025). AI Fraud Trends Report 2025. Financial services fraud analysis and prevention strategies.

[5] Agricultural Retailers Association. (2025). Digital Transaction Security in Agriculture. Industry security trends report.

[6] Associated General Contractors of America. (2025). Construction Industry Fraud Prevention Guide. Blue-collar industry security analysis.

[7] Association of Certified Fraud Examiners & SAS. (2025, November 16). Anti-Fraud Professional Preparedness Survey. Survey of fraud prevention professionals regarding AI threats.

Shae McBride (CEO / Founder)