By Aniket Amdekar, General Manager, Cyber Defence Education, Great Learning
The rapid evolution of generative AI has ushered in a new era of cyber threats: hyper-personalised phishing campaigns, deepfake-based impersonations, and misinformation that can bypass even the most advanced detection systems. In this fast-evolving threat environment, the traditional approach to cybersecurity awareness, built on periodic training and checklist-based compliance, is no longer enough. Organisations must rethink cyber resilience through the lens of human intelligence augmented by AI literacy. The ‘Human Firewall 2.0’ describes this transformation: a workforce equipped to recognise, respond to, and mitigate AI-powered digital deception in real time.
The Shifting Threat Landscape
Cybersecurity has always been a race between attackers and defenders. But with generative AI, the pace and sophistication of attacks have accelerated dramatically. According to IBM’s 2025 X-Force Threat Intelligence Index, phishing attacks have surged in sophistication, with threat actors increasingly using generative AI to craft convincing emails and deepfakes. The report notes an 84% rise in phishing emails delivering infostealers and a 180% increase in weekly phishing volume compared to 2023, underscoring the growing effectiveness of AI-powered deception. These emails mimic tone, context, and even visual branding with uncanny precision, making them nearly indistinguishable from legitimate communication.
Deepfake technology, once a novelty, is now weaponised for impersonation scams. In 2024, a UK-based company lost over $25 million after fraudsters used a deepfake video call to impersonate its CFO and other employees. These incidents highlight how generative AI has blurred the lines between real and fake, rendering many traditional detection methods ineffective.
From Awareness to Adaptability
Earlier cybersecurity programs focused on awareness, teaching employees to spot suspicious emails or avoid malicious links. However, the grammar mistakes and odd formatting that once gave away phishing attempts have largely disappeared. Generative AI tools like ChatGPT, DALL·E, and Synthesia can craft convincing narratives, visuals, and voices that pass as authentic.
To counter this, employees must evolve from awareness to adaptability: continuously learning and recalibrating against dynamic AI threats. Cybersecurity is no longer just an IT issue; it is a human issue. Every employee, regardless of role, must be equipped to question authenticity and spot anomalies.
Building AI-Critical Thinking
Empowering employees with a foundational understanding of generative AI tools, including how they create, manipulate, and mimic content, can significantly enhance early detection. AI literacy should be embedded into onboarding, training, and leadership development programs.
AI-critical thinking involves asking questions like the following; a short sketch after the list shows how such checks might be partially automated:
- Does this message align with known communication patterns?
- Is the urgency or emotional tone unusually high?
- Are there subtle inconsistencies in voice, image, or context?
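These human questions can be partially mirrored in lightweight triage tooling. The sketch below is purely illustrative: the names (`triage_score`, `URGENCY_PHRASES`, the sample domains) are hypothetical, and a naive keyword heuristic like this is no substitute for dedicated email security controls.

```python
# Illustrative only: a naive keyword/domain heuristic mirroring the three
# questions above. All names, phrases, and thresholds here are hypothetical.
URGENCY_PHRASES = [
    "urgent", "immediately", "within the hour",
    "do not tell anyone", "wire transfer", "gift cards",
]

def triage_score(message: str, sender_domain: str, known_domains: set) -> int:
    """Return a rough 0-3 risk score; any score above 0 merits human review."""
    score = 0
    text = message.lower()
    # Q1: does the sender align with known communication patterns?
    if sender_domain not in known_domains:
        score += 1
    # Q2: is the urgency or emotional tone unusually high?
    if any(phrase in text for phrase in URGENCY_PHRASES):
        score += 1
    # Q3: subtle inconsistencies, e.g. pressure to move to a private channel
    if "personal number" in text or "new bank details" in text:
        score += 1
    return score

print(triage_score(
    "URGENT: wire transfer needed immediately. Do not tell anyone.",
    "finance-approvals.example.net",
    {"example.com"},
))  # -> 2: unknown sender domain plus high-urgency language
```

Heuristics like this are easy for an AI-written email to evade; their real value is routing borderline messages to a person who can apply the contextual judgment the questions above describe.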
This mindset shift is essential. Gartner predicts that 17% of total cyberattacks will involve generative AI by 2027. The ability to critically evaluate digital content will be a core competency in the workplace.
Human-AI Collaboration in Cyber Defence
While AI can flag anomalies, human discernment is vital for contextual judgment. For example, an AI system might detect a suspicious login from an unusual location, but only a human can determine if it’s a legitimate travel scenario or a breach.
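As a concrete illustration of that division of labour, the hypothetical sketch below (names such as `LoginEvent`, `USUAL_COUNTRIES`, and `review_queue` are invented for this example) flags the anomaly automatically but deliberately defers the final decision to an analyst instead of auto-blocking:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str  # ISO country code of the login attempt

# Hypothetical baseline: countries each user normally logs in from.
USUAL_COUNTRIES = {"asmith": {"GB"}, "bpatel": {"IN", "SG"}}

# Flagged events awaiting human disposition.
review_queue = []

def handle_login(event: LoginEvent) -> None:
    """The automated layer flags anomalies; a human analyst makes the call."""
    if event.country not in USUAL_COUNTRIES.get(event.user, set()):
        # Deliberately not auto-blocked: the user may simply be travelling.
        review_queue.append((event, "login from unusual location"))

handle_login(LoginEvent("asmith", "BR"))
print(review_queue)  # analyst decides: legitimate travel or stolen credentials?
```

The design choice worth noting is the queue itself: the system narrows thousands of events down to a handful, and the human supplies the context no baseline can capture.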
Collaborative security models that pair AI analytics with informed human review represent the next frontier. Security Operations Centres (SOCs) are increasingly adopting AI-powered threat-intelligence platforms that provide real-time alerts, but these systems are only as effective as the humans interpreting them.
Microsoft’s 2025 Digital Defense Report underscores that as adversaries increasingly leverage AI, defenders must do the same, using AI to accelerate threat detection, automate remediation, and enhance decision-making. While AI can dramatically compress response times, the report emphasises that human expertise remains essential for contextual judgment and strategic resilience. This synergy is the essence of Human Firewall 2.0.
Culture of Continuous Learning and Ethical AI
Cyber resilience depends on an organisational culture that prioritises ongoing upskilling, scenario-based simulations, and ethical AI training. Employees should be encouraged to engage in simulated phishing exercises, deepfake detection challenges, and AI ethics workshops.
Creating safe spaces for employees to report suspected AI-led threats without fear of error fosters trust and vigilance. Psychological safety is key; when employees feel supported, they are more likely to act on their instincts and report anomalies.
Ethical AI usage must also be part of the conversation. As employees use generative AI tools for productivity, they must understand the risks of data leakage, model bias, and unintended consequences. Responsible AI usage is now a shared responsibility for employees at every level.
The Road Ahead
As generative AI transforms both innovation and infiltration, the human element in cybersecurity has never been more critical. The Human Firewall 2.0 will be an adaptive, intelligent network of informed professionals who understand how AI can deceive and how to counter it.
By investing in AI literacy, fostering collaboration between humans and AI, and embedding continuous learning into corporate culture, organisations can build a resilient, self-aware defence posture. In an age where digital deception is increasing in sophistication and personalisation, empowered employees remain the ultimate line of defence.