‘Deepfakes’ ranked as most serious AI crime threat

Fake audio or video content, commonly known as ‘deepfakes’, has been ranked as the most worrying use of artificial intelligence (AI) for crime or terrorism.

According to a study published in the journal Crime Science, AI could be misused in 20 ways to facilitate crime over the next 15 years.

These have been ranked in order of concern, based on the harm they could cause, the potential for criminal profit or gain, and how difficult they would be to stop.

The study’s authors, from University College London (UCL), said fake content would be difficult to detect and stop, and that it could serve a variety of criminal aims. It could also lead to widespread distrust of audio and visual evidence, causing broad societal harm.

Five other AI-enabled crimes were also judged to be of high concern: using driverless vehicles as weapons, crafting more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for large-scale blackmail, and AI-authored fake news.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives,” said the study’s senior author, Professor Lewis Griffin.

For the study, the research team compiled 20 AI-enabled crimes from academic papers, news and current affairs reports, fiction and popular culture. They then gathered 31 people with expertise in AI for two days of discussions to rank the severity of the potential crimes.

Crimes of medium concern included the sale of items and services fraudulently labelled as using AI, such as security screening and targeted advertising. According to the researchers, these would be easy to carry out, with potentially large profits.

Crimes of low concern included burglar bots (small robots), which were judged easy to defeat, for instance with letterbox cages, and AI-assisted stalking, which, although extremely damaging to individuals, could not operate at scale.

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service,” said the study’s first author, Matthew Caldwell.

“This means criminals may be able to outsource the more challenging aspects of their AI-based crimes,” Caldwell said.
