A double-edged sword: GenAI vs GenAI

By Adarsh Som and Sayantan Mondal

That the induction of a comprehensive security plan can completely fortify data is a myth, a realisation for businesses across industries as the threat landscape has evolved swiftly with emerging technologies. The fact remains that Generative AI (GenAI) is the core reason behind the increasing complexity of cyberattacks. Industry veterans, however, portray GenAI as a double-edged sword that though sophisticates the threat vectors, is also the key to defending against complex cyber threats.

A peek into the past

GenAI has a perceived notion associated with labelling it as a novel technology, however, if we care enough to take a peek into the past, the history unveils some unpopular facts. The origin of GenAI as a technology can be dated back to 1966 when Joseph Weizenbaum demonstrated the first-ever chatbot Eliza. Weisenbaum was a German computer scientist and a professor at the Massachusetts Institute of Technology (MIT). The MIT News that broke out the news of Weisenbaum’s demise in 2008 also highlighted how he grew sceptical of AI after programming the first chatbot himself.

The second chatbot took about six years to see the light of day and in 1972 PARRY came to life. The chatbot was designed to echo the thinking patterns of a paranoid schizophrenia patient. After a long halt, 2014 witnessed the first advanced version of GenAI. Ian Goodfellow, an American computer scientist, with his colleagues introduced GenAI in a study titled ‘Generative Adversarial Nets (GANs)’.

The next part of the story made GenAI the talk of the town. In 2022, OpenAI launched ChatGPT and disrupted business processes and preferences across industries. The viral success of the application made GenAI the most vouched-for technology by business leaders.

Intelligent avatar of cyber threats

After evolving for over a million years, ‘human error’ is still a floating tag that labels anything and everything that goes wrong despite a well-prepared approach. A straight inference is, if the creator struggles with imperfection, how can the creation be perfect? The analogy speaks volumes about loopholes that open a window for misusing technologies. An opportunity that fraudsters and threat vectors leverage in full swing.

To underscore the negative implication of GenAI which is weakening the existing cybersecurity technologies by the day, one must understand the enhancing complexity of attacks and their potential to cause irreparable damage.

Large Language Model (LLM)-powered GenAI models can be misused to fetch illegal, unethical, or private data from across the internet by feeding simple commands to the chatbot. Afterall, the technology is intelligent but not smart. Probably, Weisnbaum would have realised this after Eliza was made operational. Hence, his scepticism!

Not limited to unleashing the “not for public disposal” data, GenAI capabilities artfully launch social engineering attacks. Such an attack disguises an authentic entity building a certain credibility with the victim and prompting him/her to perform certain actions making the victim reveal personal or confidential data. It seems intelligence can fool the smarts at times!

Understanding the depth to which the security is compromised, GenAI models are capable of generating codes for hacking. Moreover, it can also scan software’s codes and identify susceptible areas. Opening another opportunity for the cyber hooligans to plot and launch well-planned attacks.

Besides all this, GenAI takes a step further and demonstrates the ability to generate malware, ransomware, polymorphic viruses, and so on. Therefore, even a common man who has zero knowledge of coding can leverage GenAI for sinister motives.

GenAI vs GenAI

The sub-head has it all. It is where the title of the story is justified. A popular Hindi saying “Loha hi lohe ko kat-ta hai” precisely defines the pressing need for GenAI-based advanced cybersecurity solutions to defend from GenAI-led cyber threats. The vast load of knowledge from the internet, that the technology feeds on, can be leveraged to tackle the evolving threat landscape.

How? Ratan Jyoti, Chief Information Security Officer (CISO), Ujjivan Small Finance Bank, says, “AI enhances cybersecurity by automating threat detection, response, and prediction and aids in defending against increasingly sophisticated cyber threats. AI-powered systems can analyse vast amounts of data to identify patterns and anomalies, detect and respond to threats in real time and adapt new attack techniques. Also, It can help in detecting insider threats by analysing the frequent patterns and linking it with the new attack patterns.”

LLMs can be put into action to effectively run scans on server logs and identify oddities signalling a probable ongoing attack. Moreover, if provided with access, GenAI models can scrutinise data associated with an entity’s cybersecurity and hand out a detailed report. The functionality can be automated to ensure regular updates on data security status.

Turning its intelligence into a shield, GenAI provides the ability to read through both internal and external data from various sources on the internet including social media, and detect possible attack scenarios while identifying attack patterns as well. In addition, it provides a glimpse of the type of cyber attack, its algorithm, and other associated nuances.

Organisations can formulate training modules using GenAI models to upskill their security specialists. Moreover, simplified versions of such modules can pose an effective learning tool for the workforce and add value to their digital literacy, especially in terms of keeping cyber menaces at bay.

The induction of GenAI in existing IT infrastructure can complement the existing security measures and notify whenever a breach or a cyber attack strikes. In response, it can also deploy adequate action against the attack to prevent further penetration into the system or network.

In a nutshell, GenAI is like an obedient student that learns in real-time and implements all the knowledge to provide insights as and when asked. Avoiding threats may not be in our hands but grooming the GenAI models well is. Feeding the right data to train this artificially intelligent tech with snippets of malicious codes, malware, ransomware, and other such datasets can effectively build its capacity to defend and protect systems, networks, and all the essential data.

However, despite these advances, it has some weaknesses associated with it. Many tactics such as jailbreaking, prompt injection and reverse psychology can be used to manipulate Gen AI technologies and thus posing risks to security protocols.

“GenAI based systems are prone to attacks such as quick injection attacks which may enable circumvention of safety restrictions. Jailbreaking can be used to overcome ethical filters and may result in revealing PII (Personal Identifiable Information) using multi-step jailbreaking prompt strategy. Similarly, reverse psychology can be used by understanding the underlying mechanisms, and by predicting the AI systems response, attackers can exploit and manipulate the system to produce responses that are contrary to its ethical programming,” mentions Srikanth V, Director AI, Ola Electric.

Srikanth also highlights that It is important to build multi-disciplinary teams comprising Cyber Security Analysts, Research Scientists, Engineers to come forward to address both aspects – threats and opportunities to build robust security systems. “Development of comprehensive metrics such as CyberMetric and benchmark datasets are needed to overcome weaknesses of these LLMs / Gen AI technologies,” he adds.

Awareness call! Need for a proactive approach

However, every coin has two sides and so does every technology. Every technology indeed presents new avenues for vulnerabilities, and the key lies in maintaining strict discipline in identifying and addressing these vulnerabilities. This calls for the strict application of IT ethos in organisational setups to ensure no misuse of technologies, especially intelligent ones.

“It is crucial to continuously test your APIs and applications, relentlessly seeking out any potential vulnerabilities and ensuring they are addressed promptly. This proactive approach is vital in safeguarding your platform against potential threats,” says Sunil Sapra, Co-founder & Chief Growth Officer, Eventus Security.

The Government of India has proactively addressed the grave importance of cybersecurity and recently rolled out the much-awaited Digital Personal Data Protection Act 2023. The Act though takes into consideration data protection and data privacy laying emphasis on the ‘consent of the owner’, but it does not draw the spotlight on GenAI that can make or break the existing cyber fortifications. Hence, there is a dire need for strong regulations and control measures guarding the application of GenAI models.

Sikhin Tanu Shaw, CIO, Canfin Homes Ltd. highlights, “If you ask any CIO, CTO, or CISO, they will express genuine concern about the negative applications of AI which is being used by threat vectors and attackers. This is where government regulations can play a significant role in curbing the misuse of AI and addressing cybercrime. However, it is essential to follow a structured approach to security, starting with Vulnerability Assessment and Penetration Testing (VAPT), diligent patching of applications and infrastructure etc. Establishing an AI powered Security Operations Centre (SOC) equipped with a robust Network Operations Centre (NOC), Security Information and Event Management (SIEM) solution, Security Orchestration Automation and Response (SOAR) tools, User and entity behaviour analytics (UEBA) and Data Loss Prevention (DLP) measures can help detect anomalies in the system and user behaviour”

As we look towards the future, GenAI’s role in cybersecurity will likely expand. Autonomous defense systems and self-healing networks powered by GenAI hold immense promise. However, the potential for a global “GenAI arms race” necessitates international collaboration. Responsible development, robust regulations, and a commitment to ethical use are crucial for harnessing the power of GenAI for good.

Cybersecuritydata protectionGenerative AI
Comments (0)
Add Comment