
AI in Scams and Social Engineering


by Shera Brady

March 27, 2024
Contents

  • 3 Most Prominent AI-Related Cyber Threats in 2024
  • What is Social Engineering?
  • Common AI Scams
  • How To Avoid Getting Duped

While most of us are using AI to keep us organized, generate ideas, and even plan our next vacation, there are a few malicious actors using the new technology to scam others. With the ability to mimic human behavior and generate convincing content, AI-powered tools have been used to take the cyberattack game to the next level. This dark underbelly of AI poses significant threats to individuals, organizations, and societies worldwide.

In fact, security researchers Ben Nassi, Stav Cohen, and Ron Bitton created a generative AI worm in a test environment to demonstrate the dangers of applications built on large language models. The worm can move between systems, stealing data and deploying malware as it spreads. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” said Nassi.

We’re taking a hard look at some of the most prominent AI-related cyber threats in 2024 and sharing a few pointers to avoid getting duped.

3 Most Prominent AI-Related Cyber Threats in 2024

There is no shortage of ways cybercriminals can use AI to swindle people. Understanding the pervasiveness of these threats and their consequences is the first step to avoiding them.

Here are the most common AI cyber threats we’ll likely see more of this year:

  • Phishing: Initial research by SoSafe shows that 78% of recipients opened AI-written phishing emails, with 21% going on to click the malicious content inside. What’s more, 65% of recipients were tricked into revealing personal information in input fields on the linked websites.

  • Voice Cloning: McAfee found that a quarter of adults had experienced some kind of AI voice scam, with 1 in 10 targeted personally and 15% saying it happened to someone they know. Among victims, 77% said they lost money as a result.

  • Deepfakes: In the U.S., the share of fraud attempts involving deepfakes increased from 0.2% to 2.6% between 2022 and Q1 2023, according to Content Detector.

What is Social Engineering?

Social engineering is a tactic employed by cybercriminals to manipulate individuals into divulging confidential information or performing actions that compromise security. The secret? Instead of relying on technical vulnerabilities, these attacks exploit human psychology to gain unauthorized access to systems or data.

AI algorithms empower these attackers by automating and personalizing their conversations with their targets, increasing their efficiency and sophistication—ultimately extracting sensitive information or delivering malicious payloads. 

AI can also analyze vast amounts of data to identify potential victims and craft highly tailored social engineering messages, enhancing the likelihood of success for these fraudulent schemes.

Common AI Scams

From large-scale data breaches facilitated by AI-powered phishing campaigns to the convincing manipulation of video through AI-generated deepfakes, recent incidents underscore the magnitude of the threat posed by malicious exploitation of AI technology.

Here are some of the most common ways fraudsters use AI for scams:

Sophisticated Phishing Attacks

Phishing has been a go-to tactic for cyberattackers for years, and it costs businesses everywhere millions of dollars annually.

IC3 found that phishing and social engineering scams are not only the most common but among the most costly: Business Email Compromise attacks alone accounted for around $2.4 billion in losses across the U.S.

While some organizations have gotten better at spotting these fraudulent emails, the development of AI has made it easier for scammers to create a real-looking, sophisticated, and personalized email—one that’s nearly impossible to distinguish from a legitimate message.

Matt Waxman, SVP and GM for data protection at Veritas Technologies, told SecurityWeek: “[AI] tools make it easy for attackers to improve their social engineering with AI-generated phishing emails that are much more convincing than those we’ve previously learned to spot.”
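Because the text of an AI-written phishing email offers few tells, defenders increasingly lean on metadata rather than prose. As a minimal illustration, and assuming a purely hypothetical allowlist and similarity threshold, the Python sketch below flags sender domains that closely resemble, but don’t match, a domain you trust; real mail filters layer many more signals on top of this.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist -- swap in your organization's real domains.
TRUSTED_DOMAINS = {"example.com", "examplecorp.com"}


def sender_domain(address: str) -> str:
    """Return the domain portion of an email address, normalized."""
    return address.rsplit("@", 1)[-1].strip("<> ").lower()


def is_lookalike(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that resemble, but do not match, a trusted domain.

    Exact matches pass; near-misses such as examp1e.com or
    example-corp.com are the classic phishing pattern worth escalating.
    """
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )


for addr in ["ceo@example.com", "ceo@examp1e.com", "help@unrelated.org"]:
    label = "suspicious" if is_lookalike(sender_domain(addr)) else "ok"
    print(f"{addr}: {label}")
```

Edit-distance checks like this catch single-character swaps but not homoglyphs from other alphabets or brand-new domains, which is why reporting anything suspicious to your security team still matters.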

Voice Cloning Scams

A survey recently conducted by McAfee found that a quarter of adults have already encountered some kind of AI voice scam, with the overwhelming majority of victims losing hard-earned money as a result.

Threat actors can now pull audio clips of real people’s voices from social media, clone those voices with a text-to-speech AI generator, and use the fakes to trick loved ones into thinking the person is in trouble and needs money immediately.

Even our own editor, Elliot Volkman, was recently impersonated in one of these scams. His voice was cloned, and the threat actor used an automated response system to call a family member, claim there had been an emergency, and then immediately ask for money.

The technology enabling these attacks takes only moments to use, and anyone with more than a few seconds of publicly available audio can fall victim.

Deepfake Scams

There has also been a huge spike in deepfake scams, with the number of reported cases in North America increasing by 1,740% in just one year.

In fact, an unwitting employee at a financial institution in Hong Kong was tricked into sending $25 million to cyber attackers who faked an entire video call with him.

He initially received an email from someone posing as the company’s UK-based CFO, asking for a secret electronic transfer, which he dismissed as fake. But when the CFO and a number of other team members hopped on a video call with him, he put aside his doubts and sent the money over.

Everyone else on the call was fake, but according to the employee, they all looked and sounded exactly like the colleagues he knew.

Of course, the most common subjects of these deepfake videos tend to be celebrities, as their images are easier to find and carry a bigger impact. Fake videos of Taylor Swift have surfaced—one in which she appears to offer free Le Creuset cookware, and another in which she holds pro-Trump signs at the Grammys. The millions of views these videos have racked up are especially alarming and have spurred conversation about how to spot such fakes and scrub them from the internet.

The impact of AI-driven scams extends beyond immediate financial losses, profoundly affecting individuals, businesses, and society as a whole. Businesses face operational disruptions, loss of customer trust, and legal liabilities. 

How To Avoid Getting Duped

Education and awareness serve as the primary defenses against AI-driven scams and social engineering. By educating businesses and their employees about the tactics used by cybercriminals, people can be better equipped to identify and thwart fraudulent schemes. 

Increased transparency with your security teams can make all the difference when it comes to avoiding these scams. If an email looks suspicious, report it. If you aren’t completely sure who you’re talking to, try to reach them through a different channel. Just knowing that these scams are out there, and that they are highly believable, gives you a leg up against bad actors.
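A cheap, programmatic version of that second check is to look at what your mail server has already verified. As a rough sketch using only Python’s standard library, and assuming your provider stamps an Authentication-Results header on inbound mail, the snippet below pulls out the SPF, DKIM, and DMARC verdicts; a fail or a missing header is a good cue to confirm the request through another channel before acting on it.

```python
import email
from email import policy


def auth_verdicts(raw_message: bytes) -> dict:
    """Extract SPF/DKIM/DMARC results from the Authentication-Results header.

    A fail or missing verdict does not prove fraud, but it is a cheap
    signal that the sender deserves out-of-band verification.
    """
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    header = str(msg.get("Authentication-Results") or "")
    verdicts = {}
    for clause in header.split(";"):
        clause = clause.strip()
        for mechanism in ("spf", "dkim", "dmarc"):
            if clause.startswith(mechanism + "="):
                # e.g. "dkim=pass header.d=example.com" -> "pass"
                verdicts[mechanism] = clause.split("=", 1)[1].split()[0]
    return verdicts or {"warning": "no Authentication-Results header found"}
```

Anything short of three passes is a reasonable excuse to slow down and verify the sender before you click or reply.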

Training programs and informational campaigns can empower users to recognize red flags, verify the authenticity of communications, and adopt best practices for online security.

The Federal Trade Commission (FTC) has also launched the Voice Cloning Challenge to encourage the development of multidisciplinary solutions—from products to policies to procedures—aimed at protecting consumers from AI-enabled voice cloning harms, including fraud and the broader misuse of biometric data and creative content.

As AI technology continues to advance, its impact on society's safety and security will undoubtedly evolve. While AI offers tremendous benefits, its misuse for scams and social engineering underscores the importance of ethical AI development and responsible deployment. Curious about the current state of AI regulations? Check out this blog post. 

Prioritizing transparency, accountability, and privacy protection in AI systems helps to mitigate potential risks and ensure the safety and security of businesses, individuals, and communities everywhere. To stay up-to-date on the latest compliance and security news, subscribe to Trusted, our bi-weekly newsletter.
