Greetings, Guardians of the Digital Frontier,
A study by University College London has unveiled a new class of cyber threats.
Conducted by the Dawes Centre for Future Crime at UCL, the study identified 20 applications of AI and related technologies that could be exploited for criminal activity now or in the future. The crimes were ranked as low, medium, or high concern based on:
- The harm they could cause
- Criminal profit (financial gain, terror, or reputational damage)
- Achievability of the crime
- Difficulty of defeating it
This article covers the six crimes of highest concern, plus five more of medium concern.
Buckle up. The future might be even more challenging than we thought.
High-Concern AI Crimes
1. Audio/Visual Impersonation
Artificial intelligence has blurred the line between fabricated and real. It is now capable of realistically impersonating a child or relative over a video call to obtain money, initiating phone conversations to gain access to secure systems, or even creating fake videos of public figures to sway public opinion.
Deep learning has made these impersonations complex and realistic, while defense algorithms struggle to keep pace with such sophistication.
This threat poses a high level of harm and generates cash for criminals with practically no risk.
2. Driverless Vehicles as Weapons
Autonomous AI-controlled vehicles, now in the early stages of revolutionizing transportation, also present a grim prospect: vehicular terrorism without the need for human drivers. By eliminating the driver, a single perpetrator could coordinate many vehicles and carry out multiple attacks at once, demanding innovative countermeasures.
The potential for harm and achievability are high, and criminal profit is moderate. The threat is rated difficult to defeat, even though a driverless vehicle is susceptible to the same barriers and traffic restrictions as a conventional one.
3. Tailored Phishing
In just a matter of months, the art of deception is being perfected by artificial intelligence. AI-enhanced phishing delves into personalization, crafting messages that mirror genuine communication. These tailored attacks are difficult to discern from authentic correspondence, amplifying the success rates of phishing campaigns.
Unparalleled computing capacity makes experimentation exponentially easier for threat actors, at practically no cost.
This threat is rated as difficult to defeat, as AI-generated phishing messages are nearly indistinguishable from real ones. Although the harm is moderate, the profit to criminals is high because the cost of sending the messages is nearly nothing.
4. Disrupting AI-Controlled Systems
This threat could unleash chaos in the digital realm. The range of systems controlled by artificial intelligence is vast: government, military, commerce, and households.
Many possible criminal and terror scenarios arise from targeted disruption of such systems, from causing widespread power failures to traffic gridlock and breakdown of food logistics.
Key targets include systems responsible for public safety and security, and systems overseeing financial transactions.
Although the achievability is low because technical safeguards are in place, potential harmful effects and profit are significant. Additionally, defeating these attacks demands a profound understanding of intricate infrastructures.
5. Large-Scale Blackmail
Artificial intelligence harvests vast datasets (emails, hard drives, social media, browser history), enabling large-scale blackmail by identifying vulnerabilities in numerous targets at a lower cost. Tailored, meticulously crafted threat messages can be terrifying for victims.
While it is still hard for criminals to amass such large datasets, the difficulty of defeat lies in victims' reluctance to confront exposure, which makes them susceptible to coercion. The tactic generates significant profit for criminals even though it causes only moderate harm.
6. AI-Authored Fake News Manipulating Reality
Fake news, bolstered by artificial intelligence, inundates the digital landscape, displacing genuine information. It generates diverse content versions, giving the impression of the same news coming from multiple sources, amplifying credibility and impact.
Criminal profit was ranked as low because the value of financial gains is elusive, although there is potential for market manipulation. It is easily achievable, as the technology already exists, and is difficult to defeat.
Medium-Concern Crimes
Here are five of the crimes considered of medium concern:
Misuse of Military Robots
Military hardware falling into criminal or terrorist hands, including autonomous battlefield robots, is a potent threat. Since the capabilities of militaries worldwide are kept secret, the threat level is hard to determine.
Data Poisoning
The calculated manipulation of machine-learning training data to introduce subtle biases can be an end in itself or a setup for later exploitation. Trusted data sources, though resilient, are not immune. One example, the deliberate desensitization of automated X-ray threat detectors, underscores the devious nature of this crime.
Learning-Based Cyber-Attacks
Until now, cyber-attacks have been either highly tailored to a particular target or unrefined but heavily automated, relying on sheer numbers.
AI blends sophistication with automation, making it possible for criminals to launch complex attacks at scale.
Tricking Face Recognition
AI-powered face recognition systems are primarily used to prove identity. The technology is being tested by police forces for tracking suspects and by customs authorities, and in China it is used extensively.
That makes these systems attractive targets for criminals, who could defeat them to impersonate victims or evade detection.
Market Bombing
This threat involves manipulating financial or stock markets via targeted, high-frequency patterns of trades to damage competitors, currencies, or the economic system as a whole.
At this point, the crime requires sophistication that most criminals do not possess. But as tools become more sophisticated, the consequences could be catastrophic.
The Rise of the Evil Machines
Awareness is our biggest strength. When employees are well-educated, engaged, and vigilant, the risk posed by one phishing email, or a thousand, drops sharply.
Cybersecurity information must be interesting, relevant, actionable, and readily available. Employees must be reminded of the technology lurking on the web so they know to slow down and not react instinctively to urgent messages.
Don't wait a whole semester, or a year, to get your organization caught up on cybersecurity, and don't make the material too complicated or technical!
Talk to employees at their level of understanding, which is very different from an IT team's. Cybersecurity awareness should be educational and empowering.
Job 1: Foster good communication and trust between cybersecurity experts and non-technical personnel.
Educate yourself and your teams about emerging AI technologies and potential misuse.
Too busy to educate employees while doing a thousand other important tasks?
Aware Force is here. We deliver terrific content year-round and keep cybersecurity on the minds of your workforce.
We’re standing by to show you innovative ways organizations use Aware Force customized-for-you cybersecurity content.