When algorithms turn against us: AI in the hands of cybercriminals

By Mandar Patil, Founding Member at Cyble

AI was created to make technology better, faster, and more efficient. It appears in applications ranging from personalized recommendations to fraud prevention and detection, with the goal of protecting users and creating a better digital experience. But with the advancement and widespread adoption of AI technologies, cybercriminals have gained the ability to exploit system vulnerabilities on a far larger scale; in other words, cybercriminals are employing AI as a “weapon” against systems for malice and gain.

A new era of cybercrime has emerged in which criminal activity is no longer manual, slow, or easy to detect, but instead intelligent, adaptive, and frighteningly human-like.

Smarter Attacks, Lower Barriers to Entry
As cybercrime becomes more sophisticated, the barrier to entry has dropped significantly. Historically, an attacker needed a certain level of technical skill and expertise to write malicious code, craft and send phishing emails, or compromise an organization’s systems. With generative models and automated attack tools, however, cybercriminals can now launch highly targeted campaigns with little more than a few clicks or taps.

Cybercriminals are using AI to create sophisticated phishing emails. These emails adapt their tone, language, and references to the recipient based on publicly available information about them. By using AI to eliminate the telltale red flag of poor grammar, cybercriminals increase both their success rate and the speed with which stolen data is exploited.

Deepfakes and Digital Deception
One of the most dangerous uses of AI in cybercrime is the creation of deepfakes. Using voice cloning and synthetic video generation, deepfakes can impersonate people such as a company’s CEO, a family member, or an elected official. There have already been instances of cybercriminals successfully tricking employees into transferring large sums of money on the strength of fraudulent voice instructions from what appeared to be senior management.

Psychological versus Technical Manipulation
An important consideration in cyber security, beyond technical defences, is the psychological manipulation of users. Once visual and audio cues can no longer be trusted, digital trust itself begins to erode. Verification processes that were once instantly recognizable are transforming into multi-layered authentication, which lengthens the time it takes to verify a decision in a high-pressure environment.

Adaptive Malware and Intelligent Breaches
AI-driven malware can adapt to its surrounding environment through machine learning. Rather than following a pre-configured instruction set, it learns from its environment and alters its execution based on how well existing security measures detect it. Such malware may even recognize when it is being monitored by an antivirus product, modify its execution timeline, or hide inside legitimate processes to pass as a normal application.

These adaptive strategies undermine the signature-based detection methods at the heart of traditional cyber-defences. Defenders are no longer simply identifying known threats; they are responding to threats that constantly evolve and learn in real time.

Data as the New Weapon
AI gives cybercriminals the tools to maximize the value of stolen data. Large datasets, instead of simply being sold, are now broken down, processed, and used tactically. AI can identify the most valuable targets in a dataset and predict each target’s behavioural patterns to determine the most effective way to extract financial gain, influence, or access. This transformation turns an isolated data breach into a long-term exploitation cycle, leaving victims vulnerable long after the original incident.

The Ethical and Security Issues
AI’s misuse is a growing problem that has created a paradox. Innovation cannot stop, nor should it: AI is helping move healthcare, finance, government, and education forward. Yet the rate of AI adoption has outpaced the creation of ethical and security frameworks and regulations. As a result, cyber security needs to shift from a reactive to a predictive stance, using AI not only to respond to attacks but also to anticipate them. Collaboration between governments, technology companies, and security professionals is no longer an option; it is an absolute necessity.

A Neutral Technology
AI is not itself a “bad guy”; it is neutral, made harmful only by the intent of those who use it. In the hands of a cybercriminal, algorithms become instruments of manipulation, deception, and disruption. The future should not be about fearing AI, but about governing it with responsible practices so that intelligence is used for protection rather than exploitation.

The growth of algorithms raises one final question: no longer whether they will be misused, but how ready we will be when they are.
