As AI reshapes the enterprise, security architecture can’t afford to lag behind; it has to evolve in real time: Suvabrata Sinha, Zscaler

As generative AI becomes integral to enterprise workflows, organisations are racing to balance innovation with security. In this exclusive interview with Express Computer, Suvabrata Sinha, CISO-in-Residence, India at Zscaler, shares critical insights on how AI is transforming the threat landscape, why Zero Trust is essential in an AI-first world, and what enterprises must do to stay ahead of rapidly evolving cyber risks. Drawing from the latest ThreatLabz 2025 AI Security Report, he outlines the challenges posed by Shadow AI, deepfakes, and automated phishing and offers a clear path forward for security leaders navigating this new era.

What is driving the rapid adoption of generative AI across industries, and how are cybersecurity risks evolving as a result?

Generative AI has gone from buzzword to backbone, becoming foundational to enterprise productivity. It’s reshaping how we code, communicate, and make decisions. This momentum is reflected in our latest ThreatLabz 2025 AI Security Report, which recorded a 3,000% year-over-year spike in enterprise AI/ML transactions. Tools like ChatGPT, Grammarly, and Copilot are not just among the most widely used; interestingly, they are also among the most blocked applications, highlighting growing concerns over data security.

Threat actors are now using AI to automate attacks, build deepfakes, and create hyper-personalised social engineering campaigns that are harder to detect and easier to execute. Organisations need to adapt with equal urgency. That means applying Zero Trust principles to AI interactions, continuously assessing AI risk, and building AI-aware controls into their security stack. As AI reshapes the enterprise, security architecture can’t afford to lag behind; it has to evolve in real time.

How is the proliferation of Shadow AI and unsanctioned open-source models increasing the attack surface within enterprises?

We’re in a phase where anyone can plug in an AI tool and get things done faster. Employees are embracing AI tools like ChatGPT, DeepSeek, and other open-source models to boost productivity. But problems (and risks) arise when these tools are used without the IT team’s knowledge or broader organisational oversight. This unsanctioned usage creates blind spots in data governance, leading to potential data leaks, compliance violations, and an expanded attack surface. Threat actors exploit this trend, leveraging AI-powered phishing, fake AI platforms, and deepfake-driven fraud to infiltrate corporate networks. The lack of visibility into these unsanctioned tools makes it easier for attackers to bypass traditional security measures.

To mitigate these risks, organisations need visibility, control, and enforcement. That means monitoring AI activity, isolating access to unsanctioned tools, and using inline DLP to prevent sensitive data from being exposed or entered into AI prompts. With the right guardrails in place, enterprises can harness AI’s potential without compromising security.
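The inline DLP idea described above — scanning AI prompts before they leave the enterprise and blocking those that carry sensitive data — can be sketched in a few lines. This is a minimal illustration only, not Zscaler’s product: the pattern names and regexes are simplified assumptions, and real DLP engines use far richer detection (exact-data matching, fingerprinting, ML classifiers).

```python
import re

# Illustrative patterns only; a production DLP engine is far more sophisticated.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an AI prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Inline check: block the prompt (False) if any sensitive pattern matches."""
    return not scan_prompt(prompt)
```

A harmless prompt like "Summarise this meeting" passes, while one containing a card number or an API key is blocked before it ever reaches the external AI tool — the "guardrails" the answer refers to.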

With the rise of ‘AI Builders,’ agentic platforms, and generative tools, how are cybercriminals leveraging them to conduct advanced phishing, deepfake-based social engineering, and automation-driven attacks, and how should enterprises respond?

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open-source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners.

What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. Coupled with continuous risk assessment and default settings that prioritise security over convenience, these steps help reduce the attack surface.
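The Zero Trust access pattern described above can be sketched as a default-deny policy check: every request to an AI tool is evaluated against the user’s role and the tool’s sanction status before it is allowed. The role names, tool list, and sensitivity tiers below are hypothetical, chosen only to illustrate the shape of such a policy.

```python
from dataclasses import dataclass

# Hypothetical policy data for illustration; real deployments would pull these
# from identity providers and an approved-application inventory.
SANCTIONED_TOOLS = {"corp-chatgpt", "corp-copilot"}
ROLE_MAX_SENSITIVITY = {"engineer": 2, "analyst": 1, "contractor": 0}

@dataclass
class AccessRequest:
    role: str
    tool: str
    data_sensitivity: int  # 0 = public, 1 = internal, 2 = confidential

def decide(req: AccessRequest) -> str:
    """Default-deny: allow only sanctioned tools, within the role's data ceiling."""
    if req.tool not in SANCTIONED_TOOLS:
        return "block"  # unsanctioned AI tool: isolate or block outright
    if req.data_sensitivity > ROLE_MAX_SENSITIVITY.get(req.role, -1):
        return "block"  # data classification exceeds what this role may expose
    return "allow"
```

The important design choice is the default: an unknown role or unknown tool falls through to "block", so nothing is trusted implicitly — the core of the Zero Trust posture the answer advocates.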

What role can AI play in helping CISOs and security teams detect and neutralise synthetic content, while strengthening real-time incident response and threat hunting efforts?

We believe leveraging AI is the best defense against AI-powered attacks. As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread.

But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. This also means implementing robust human oversight for AI-driven processes, ensuring that key business decisions are not made without appropriate checks. Enterprises should also adopt a secure AI product lifecycle, where security is embedded throughout the entire product lifecycle, from development to deployment. And most importantly, all AI-generated content, especially anything public-facing, should undergo human review to catch manipulation and misinformation that automation might miss.

Given India’s 36.45% share of APAC AI traffic, what does this dominance mean for innovation and where does it increase the nation’s vulnerability to AI-enabled cyber threats?

India’s lead in APAC AI/ML traffic (with a 36.45% share), and second place globally, is more than a statistic. It’s a signal that the country is emerging as a serious AI innovation hub. Across sectors like finance, insurance, and manufacturing, businesses are adopting AI at scale to boost efficiency, improve customer experience, and drive growth.

Open-source and generative AI tools are everywhere. Shadow AI is growing. Deepfake scams are getting more convincing. Agentic systems, where AI acts with little oversight, are expanding the risk surface exponentially. So, while India races ahead in adoption, we can’t afford to let security fall behind. Enterprises need to double down on Zero Trust, build visibility into AI usage, and bake AI-powered detection into their defenses. Because in this race, speed only matters if you stay safe.
