10 AI-Enhanced Cyber Crimes to be Concerned About

Greetings, Guardians of the Digital Frontier,

A study by University College London has unveiled a new class of cyber threats.

Conducted by the Dawes Centre for Future Crime at UCL, the study identified 20 applications of AI and related technologies that could be exploited for criminal activity now or in the future. Each crime was rated as a low, medium, or high concern based on the harm it could cause, the profit it could generate for criminals, how achievable it is, and how difficult it would be to defeat.

This article covers the six most concerning crimes and five more of medium concern.

Buckle up. The future might be even more challenging than we thought.

High-Concern AI Crimes

1. Audio/Visual Impersonation

Artificial intelligence has blurred the line between fabricated and real. It’s now capable of realistically impersonating children or relatives over video calls to gain access to funds, initiate phone conversations to access secure systems, or even create fake videos of public figures to influence public opinion.

Deep learning has made these impersonations complex and realistic, while defense algorithms struggle to keep pace with such sophistication.

This threat poses a high level of harm and generates cash for criminals with practically no risk.

2. Driverless Vehicles as Weapons

Autonomous AI-controlled vehicles, now in the early stages of revolutionizing transportation, also present a grim prospect: vehicular terrorism without the need for human drivers. By eliminating the driver, artificial intelligence could let a single perpetrator coordinate many vehicles and carry out multiple attacks at once, demanding innovative countermeasures.

The potential for harm and the achievability are both high, and criminal profit is moderate. The attack is difficult to defeat because the main countermeasures, physical barriers and traffic restrictions, are the same ones used against regular vehicles.

3. Tailored Phishing

In a matter of months, artificial intelligence has begun perfecting the art of deception. AI-enhanced phishing delves into personalization, crafting messages that mirror genuine communication. These tailored attacks are difficult to distinguish from authentic correspondence, amplifying the success rates of phishing campaigns.

Unparalleled computing capacity makes “experimenting” exponentially easier for threat actors, at practically no cost.

This threat is rated as difficult to defeat, as AI-generated phishing messages are nearly indistinguishable from real ones. Although the harm is moderate, the profit to criminals is high because the cost of sending the messages is nearly nothing.

4. Disrupting AI-Controlled Systems

This threat can unleash chaos in the digital realm. The number of systems controlled by artificial intelligence is vast, spanning government, military, commerce, and households.

Many possible criminal and terror scenarios arise from targeted disruption of such systems, from causing widespread power failures to traffic gridlock and breakdown of food logistics.

Key targets include systems responsible for public safety and security, and systems overseeing financial transactions.

Although the achievability is low because technical safeguards are in place, potential harmful effects and profit are significant. Additionally, defeating these attacks demands a profound understanding of intricate infrastructures.

5. Large-Scale Blackmail

Artificial intelligence can harvest vast datasets (emails, hard drives, social media, browser history), enabling large-scale blackmail by identifying vulnerabilities across numerous targets at low cost. Tailored, meticulously crafted threat messages can be terrifying for victims.

While it is still hard for criminals to collect such large datasets, the difficulty of defeat lies in victims' reluctance to confront exposure, which makes them susceptible to coercion. The tactic generates significant profit for criminals even though it causes only moderate harm.

6. AI-Authored Fake News Manipulating Reality 

Fake news, bolstered by artificial intelligence, inundates the digital landscape, displacing genuine information. It generates diverse content versions, giving the impression of the same news coming from multiple sources, amplifying credibility and impact.

Criminal profit was ranked as low because the value of financial gains is elusive, although there is potential for market manipulation. It is easily achievable, as the technology already exists, and is difficult to defeat.

Medium-Concern AI Crimes

Here are five crimes considered to be of medium concern:

Misuse of Military Robots

Military hardware falling into criminal or terrorist hands, including autonomous battlefield robots, is a potent threat. Since militaries worldwide keep their capabilities secret, the threat level is hard to determine.

Data Poisoning

The calculated manipulation of machine-learning training data to introduce subtle biases can serve as an end in itself or lay the groundwork for later exploitation. Trusted data sources, though resilient, are not immune. One example, the deliberate desensitization of automated X-ray threat detectors or similar systems, underscores the devious nature of this crime.

Learning-Based Cyber-Attacks

Until recently, cyber-attacks were either highly tailored to a particular target or unrefined but heavily automated, relying on sheer numbers.

AI blends sophistication with automation, making it possible for criminals to launch complex attacks in large numbers.

Tricking Face Recognition

AI-powered face recognition systems are primarily used to prove identity. This technology is being tested by the police for tracking suspects and by customs authorities. In China, it is used extensively.

These systems are also attractive targets for criminals, who could profit from defeating or spoofing them, for example to unlock stolen devices or evade tracking.

Market bombing

This threat involves the manipulation of financial or stock markets via targeted, high-frequency patterns of trades to damage competitors, currencies, or the economic system as a whole. 

At this point, the crime requires sophistication that most criminals do not possess. But as tools become more sophisticated, the consequences could be catastrophic.

The rise of the evil machines

Awareness is our biggest strength. When employees are well-educated, engaged, and vigilant, one phishing email, or one thousand, poses far less risk.

Cybersecurity information must be interesting, relevant, actionable, and readily available. Employees must be reminded of the technology lurking on the web so they know to slow down and not react instinctively to urgent messages.

Don’t wait a whole semester or a year to get your organization caught up on cybersecurity, and don’t make it too complicated or technical for them!

Talk to them at their level of understanding, which is very different from an IT team’s! Cybersecurity awareness should be educational and empowering.

Job 1: Foster good communication and trust between cybersecurity experts and non-technical personnel.

Educate yourself and your teams about emerging AI technologies and potential misuse.

Too busy to educate employees while doing a thousand other important tasks?

Aware Force is here. We deliver terrific content year-round and keep cybersecurity on the minds of your workforce. 

We’re standing by to show you innovative ways organizations use Aware Force’s customized-for-you cybersecurity content.

Vishing: 3 Examples of How Voice Cloning is Making it Easier Than Ever

The line between reality and fabrication is blurring, and the age of AI and deepfakes is moving forward at a spectacular pace: ChatGPT’s website reportedly draws some 1.5 billion visits a month, and even more capable models are on the way. AI has revolutionized the entertainment industry and opened up new avenues for criminals. One high-profile instance is the use of AI voice cloning for vishing attacks.

An alarming scenario of a vishing attack

You receive a phone call: Mom is on a vacation trip, and she’s frantic. Her voice, laced with fear and desperation, informs you that she's been detained in a foreign country. To secure her release, she needs money transferred to an overseas account immediately...

Overwhelmed by shock and concern, you act instinctively. Without questioning the call's authenticity, you proceed with the wire transfer to get her out. It is a vishing scam that costs even technically adept, well-informed victims significant sums of money.

The voice on the other end sounded just like hers. The caller ID might have displayed something like “Police Unit.” Everything seemed so real, so urgent.

This is the dystopian world we’re entering, where everyday scam calls are tailored and engineered like a heist out of Mission: Impossible. And it’s all so easy for perpetrators.

In this article, we outline three cases where voice cloning is used.

What is AI Voice Cloning?

AI voice cloning enables the creation of realistic replicas of human voices by training a tool (cheap and easy to find) on just a few seconds of a voice sample, which can be captured from posts on social media.

AI analyzes as little as three seconds of audio and replicates the unique vocal patterns, intonation, and speech cadence. 

Once trained, it synthesizes new audio content that mimics the target individual's voice, making it virtually indistinguishable from the real person. The similarity is remarkable. 

And what exactly is vishing?

Vishing, a combination of “voice” and “phishing,” is a social engineering attack that utilizes voice calls to deceive individuals into revealing sensitive information or transferring funds. It exploits the power of human empathy by impersonating trusted individuals (family members, bank representatives, or law enforcement officials).

AI voice cloning has elevated vishing attacks to a new level. Scammers can easily replicate voices and seamlessly bypass traditional authentication methods, convincing victims that the conversation comes from a trusted source.

Three Examples of AI Voice Cloning Being Used for Vishing Attacks

The audio quality is jaw-dropping, from a viral video impersonating a celebrity to a $35 million heist. The implications of this technology are so far-reaching that employees should be reminded often about how it works.

Case 1: AI Joe Rogan Promotes Libido Booster for Men in Illegal Deepfake Video

In 2023, a deepfake video surfaced online featuring podcaster Joe Rogan endorsing a male libido booster supplement. The video, meticulously crafted using AI voice cloning, was so convincing that it fooled many viewers.

In a podcast episode from October, Rogan addressed the issue of deepfakes and said that he was "disappointed" that his likeness had been used in a fraudulent video. He also warned his listeners to be wary of such videos and to do their research before believing anything they see online.

This incident highlights how AI voice cloning of celebrities can be misused to spread misinformation and promote fraudulent products or services.

Case 2: Voice Cloning Heist

In a case exposed by Red Goat, a group of cybercriminals employed AI voice cloning to perpetrate a multi-million-dollar heist.

Although victims’ names were undisclosed, it is known that the Ministry of Justice of the United Arab Emirates submitted a request for assistance from the Criminal Division of the U.S. Department of Justice.

According to court documents, the victim company's branch manager received a phone call from someone claiming to be from the company headquarters. The caller's voice was so similar to a company director's that the branch manager believed the call was legitimate.

Using voice and email, the caller informed the branch manager that the company was about to make an acquisition and that a lawyer named “Martin Zelner” had been authorized to oversee the process.

The manager received multiple emails from Zelner regarding the acquisition, including a letter of authorization from the director to Zelner. As a result of these communications, when Zelner requested that the branch manager transfer $35 million to various accounts as part of the acquisition, the branch manager complied.

Case 3: Scammers Use AI to Mimic Voices of Loved Ones in Distress

In 2023, a particularly disturbing vishing tactic emerged, utilizing AI to mimic the voices of distressed loved ones. Scammers contact victims, impersonating their children or other family members and claiming to be in dire need of financial assistance due to an arrest or medical emergency.

These scams are becoming increasingly popular because it is so easy to train AI voice models.

This case underscores the emotional manipulation employed by vishing scammers, preying on victims' vulnerabilities and exploiting their desire to help loved ones in distress.

What now?

Have you noticed an increase in spam and scam calls lately?

You have. According to a 2022 report by the app Truecaller, the number of spam and scam calls in the US has risen steadily in recent years. Last year alone, 31% of all calls received by US residents were spam or scam calls, up from 18% in 2018.

And surprise: 20- and 30-somethings are more likely to be victims

Another finding of the Truecaller report was that younger Americans are more susceptible to scam calls than their older counterparts: in 2021, 41% of Americans aged 18-24 received a scam call, compared to 20% of Americans aged 65 and over.

The Implications of AI Voice Cloning for Vishing Attacks

Factors like the rise of robocalls, the availability of cheap calling technology, and the increasing sophistication of scam techniques powered by AI have made it easier for criminals to adopt this type of attack.

And the implications of AI voice cloning extend far beyond financial losses. These attacks can cause significant emotional distress and shame, damage reputations, and undermine public trust in institutions and individuals.

Awareness and Prevention are Key

As a countermeasure, cybersecurity awareness professionals should continuously educate employees about vishing tactics and empower individuals with the knowledge and tools to protect themselves from these deceptive calls.

6 tips for employees to protect against voice cloning scams

For your employees, here are six measures to keep themselves, their employer, and their family members safe:

1. Verify unexpected requests by hanging up and calling the person back on a number you know is theirs.

2. Agree on a family “safe word” to confirm identity in an emergency call.

3. Don’t trust caller ID; it is easily spoofed.

4. Be suspicious of urgency. Scammers engineer panic so you act before you think.

5. Never send money or share credentials based on a voice call alone.

6. Limit the voice recordings you post publicly on social media.

How to keep cybersecurity top of mind in an organization

CISOs and cybersecurity teams are under tremendous pressure and must compete for qualified employees. While technical tools exist to test and track employees’ cybersecurity prowess, the key to engaging employees isn’t automation. 

It’s actionable videos, quizzes, infographics, the latest cyber news, and answers to common questions written in a style they can understand and share with their families. 

Aware Force delivers that service year-round. It’s easy to use and cost-effective. And everything we deliver is branded and customized, so all the content appears to come from your company’s IT team.

Aware Force generates unsolicited praise from employees and fierce loyalty from our customers. Check out our extensive cyber library and our awesome twice-monthly cybersecurity newsletter — all branded and tailored for you.