Synthetic threats: When machines learn to scam humans

Mr. Dipesh Ranjan, SVP, ANZ & Europe, GSI, Cyble

Artificial intelligence is evolving rapidly, and the threats it poses to people are evolving with it. Among the most concerning is the rise of “synthetic scams”: scams built with AI tools that imitate human behaviour with striking accuracy. You may have heard of “deepfakes”, videos of people saying or doing things they never actually said or did. Scammers can use such technology to deceive victims, and language barriers, time zones, and a lack of technical skill no longer stand in their way.

Deepfakes and Digital Impersonation

Deepfake technology has become one of the key enablers of this new generation of scams. Using artificial intelligence, scammers can clone a person’s voice, reproduce their facial expressions, and create highly realistic videos of events that never occurred. The result is a surge in impersonation scams in which attackers pose as a company CEO, a family member, or even a government official. In some cases, employees have been manipulated into transferring large sums of money after receiving apparently legitimate instructions from a senior executive, only to realise later that the instructions were a product of artificial intelligence.

Hyper-Personalized Phishing Attacks

In the past, victims could usually spot phishing emails because they were generic or poorly written. As artificial intelligence has advanced, however, phishing has become far more targeted and personalized. AI can mine social media profiles, public data, and a person’s online behaviour to craft messages that appear realistic, relevant, and contextually appropriate. By mimicking a person’s writing style, referencing real events in their life, and exploiting emotional triggers, these messages have become very difficult to identify as phishing.

The Threat of AI Scams

One of the greatest dangers of AI scams is scale. Automated systems allow attackers to run thousands of customized attacks simultaneously, continually adapting based on how victims respond. This combination of scale and adaptability overwhelms traditional fraud detection systems. The speed of these attacks also leaves very little opportunity for human intervention, which makes preventing attacks before they occur even more important than identifying and responding to them afterwards.

Building Resilience in a Synthetic World

Combating synthetic threats requires a multi-pronged approach. Organizations should invest in next-generation cybersecurity tools that can recognize anomalous behaviour in communications. Equally vital are awareness and education: employees and consumers alike should be trained to question unusual requests, to verify the identity of anyone requesting sensitive information through a second channel of communication, and to be wary of messages designed to provoke a strong emotional response. Governments and technology companies also need to work together to create and enforce regulations that define ethical boundaries for the use of AI.
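The idea of routing unusual requests to out-of-band verification can be sketched in code. The following is a minimal, hypothetical heuristic (the cue lists, function name, and two-signal threshold are illustrative assumptions, not a real product's logic): a message is escalated for verification through a second channel when it combines enough independent risk signals, such as urgency language, payment language, and an unverified sender.

```python
# Hypothetical heuristic sketch: escalate messages that combine multiple
# risk signals (urgency cues, payment cues, unverified sender) for
# out-of-band verification. Cue lists and threshold are illustrative only.

URGENCY_CUES = {"urgent", "immediately", "asap", "confidential"}
PAYMENT_CUES = {"wire", "transfer", "payment", "invoice"}

def needs_out_of_band_verification(text: str, sender: str,
                                   verified_senders: set) -> bool:
    """Return True when a message should be confirmed via a second channel."""
    words = set(text.lower().split())
    urgent = bool(words & URGENCY_CUES)       # urgency language present?
    payment = bool(words & PAYMENT_CUES)      # money-movement language present?
    unverified = sender.lower() not in verified_senders
    # Require at least two independent risk signals before escalating.
    return sum([urgent, payment, unverified]) >= 2

# Example: an urgent wire request from a look-alike address is escalated,
# while routine mail from a verified sender is not.
verified = {"ceo@company.example"}
print(needs_out_of_band_verification(
    "Urgent wire transfer needed immediately", "ceo@look-alike.example", verified))
print(needs_out_of_band_verification(
    "See you at lunch", "ceo@company.example", verified))
```

A real system would of course use far richer signals (sender reputation, behavioural baselines, language models), but the design point stands: no single cue triggers action on its own, and escalation means human verification through a separate channel, not automatic blocking.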

Trust in the Age of Machines

As machines grow more adept at imitating humans, trust as we once knew it is being challenged. The line between what is real and what is not is rapidly dissolving, forcing individuals and organizations to rethink how they authenticate identities and transactions. In this new environment, skepticism is not merely prudent; it is imperative. The fight against AI scams is not only about technology; it is about preserving trust in an increasingly automated society.
