Express Computer

Invisible AI risks are the next frontier of cybersecurity: Sharda Tickoo, Trend Micro


As Indian enterprises race to operationalise AI, the conversation around cybersecurity is undergoing a fundamental reset. What was once about protecting infrastructure, networks, and data is now about something far more fragile and far more consequential: intelligence itself. For Sharda Tickoo, Country Manager for India & SAARC at Trend Micro, this shift marks the most profound transformation cybersecurity has seen in decades.

“When AI systems begin making autonomous decisions that shape customer experiences, financial outcomes, and operational continuity, the risk is no longer limited to data exposure,” says Sharda Tickoo. “The real danger is corrupted intelligence: models that appear to function normally but quietly undermine trust and business outcomes at scale.”

In traditional IT environments, breaches were tangible. Data was stolen, systems were taken offline, alerts were triggered. In AI-driven enterprises, compromise is often silent. A poisoned model can keep delivering outputs, passing accuracy checks, and influencing thousands or even millions of decisions before anyone realises something is wrong. By then, the damage is systemic.

From protecting systems to protecting trust

According to Tickoo, AI fundamentally changes the role of cybersecurity. Traditional perimeter defence and data protection models are no longer sufficient in environments where data is constantly in motion, models are continuously learning, and decisions are made at machine speed.

“With AI, we must secure the entire lifecycle, from training datasets and model parameters to inference endpoints and decision logic,” she explains. “A poisoned model making credit or fraud decisions can cause large-scale harm long before it is detected. That shifts cybersecurity from asset protection to trust preservation.”

This challenge is particularly acute in India, where AI adoption is accelerating across BFSI, healthcare, manufacturing, and digital platforms. Enterprises are under growing scrutiny from regulators, customers, and partners to demonstrate algorithmic accountability and responsible data governance. Security teams are no longer just defending against attackers; they are being asked to prove that AI-driven decisions are explainable, auditable, and free from manipulation.

“Trust becomes the most valuable currency in AI-led enterprises,” Tickoo says. “Security has to ensure that intelligence remains defensible, not just available.”

Why legacy cloud security fails AI workloads

The problem, she argues, is that most cloud security architectures were never designed for this reality. Built for static workloads with predictable behaviour, traditional tools struggle to keep up with the fluid, GPU-driven environments that power modern AI.

“AI workloads are ephemeral by nature, spinning up and shutting down at machine speed,” Tickoo notes. “Security policies designed for persistent infrastructure simply cannot adapt fast enough.”

Worse still, many conventional security tools compete directly with AI workloads for compute resources. In GPU-intensive environments, security scanning can slow down training and inference, creating pressure to relax controls at precisely the moment when protection is most critical.

But the deepest blind spot lies in visibility. Legacy tools understand data at rest and data in transit. AI introduces a third state: data in use, actively processed in GPU memory. Attacks targeting this layer, from GPU memory exploitation to malicious container images and compromised AI libraries, often go completely unnoticed.

“These are invisible risks,” Tickoo says. “Model poisoning, adversarial manipulation, and model extraction attacks don’t look like traditional breaches, so legacy controls simply don’t see them.”

The silent danger of model poisoning

Among the emerging threats, Tickoo believes model poisoning is the most underestimated, and the most dangerous. Unlike API abuse or perimeter attacks, its impact unfolds quietly over time.

“Model poisoning doesn’t trigger alarms,” she explains. “Attackers introduce subtle manipulations into training data that skew outcomes while preserving overall accuracy. From a traditional security perspective, nothing appears wrong.”

A fraud detection system, for instance, may continue to perform well in tests while systematically ignoring specific malicious patterns. Months later, when financial losses surface, tracing the root cause becomes extraordinarily difficult.
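The scenario Tickoo describes can be illustrated with a toy simulation. The sketch below (all names and numbers are hypothetical, not drawn from any real system) flips training labels only on a narrow slice of high-value fraud records; the aggregate label agreement stays above 99%, which is why a whole-dataset accuracy check would raise no alarm even though the targeted pattern is now systematically mislabeled.

```python
import random

random.seed(0)

# Hypothetical training set: 10,000 transactions, roughly 2% fraud.
# Each record is (transaction_amount, is_fraud).
data = [(random.uniform(1, 1000), random.random() < 0.02)
        for _ in range(10_000)]

# Attacker flips labels ONLY on a narrow slice: fraud above 900.
poisoned = [
    (amt, False) if (fraud and amt >= 900) else (amt, fraud)
    for amt, fraud in data
]

changed = sum(1 for a, b in zip(data, poisoned) if a != b)
agreement = 1 - changed / len(data)

print(f"labels flipped: {changed} of {len(data)}")
print(f"overall label agreement: {agreement:.2%}")
# Aggregate agreement stays near 100%, so a dataset-level accuracy
# check passes, yet every high-value fraud case is now mislabeled.
```

The point of the toy is the ratio, not the classifier: a manipulation affecting a fraction of a percent of records is invisible to aggregate metrics but total within the slice an attacker cares about.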

“By the time you discover a poisoned model, the damage has already compounded across every decision it made,” Tickoo warns. “That’s what makes it so dangerous.”

When intelligence itself is compromised

The consequences of compromised model integrity go far beyond those of conventional breaches. A data breach exposes historical information. A compromised AI model corrupts future decisions.

“You’re no longer dealing with a one-time incident,” Tickoo says. “You’re dealing with ongoing, automated harm. Every inference becomes a force multiplier for damage.”

In high-stakes deployments such as medical diagnostics, credit scoring, autonomous systems, or supply chain optimisation, the implications can be severe. Regulatory penalties, legal liability, and reputational damage can dwarf the impact of traditional cyber incidents. Remediation is equally complex, requiring enterprises to identify poisoned data, retrain models, reassess past decisions, and determine whether business operations can continue to rely on AI at all.

“This is not a simple patch-and-move-on scenario,” she adds. “It can take months to restore confidence in compromised intelligence.”

Where enterprises most often fall short

Despite the risks, many organisations still treat AI security as an extension of existing cloud controls. Tickoo sees the most common failures at the transition points, where experimentation becomes production.

“The handoff between data science teams and production environments is where attackers find opportunity,” she says. Training pipelines often ingest data from diverse, unverified sources, creating fertile ground for poisoning attacks. Container images and open-source AI frameworks are pulled into environments without rigorous scanning, while model registries frequently lack strong access controls or integrity checks.

Even after deployment, many organisations assume that pre-production validation is enough. In reality, AI models must be continuously monitored for anomalous behaviour, adversarial inputs, and abuse patterns.
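One minimal form of the continuous monitoring Tickoo calls for is score-distribution drift detection. The sketch below is a simplified illustration, not a production design: it compares the mean of a rolling window of live model scores against a reference window from validation, and flags a statistically large shift for investigation. All class and parameter names are hypothetical.

```python
import random
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Toy post-deployment monitor: flags when the live distribution
    of model scores drifts away from a reference (validation) window."""

    def __init__(self, reference_scores, window=500, z_threshold=4.0):
        self.ref_mean = mean(reference_scores)
        self.ref_std = stdev(reference_scores)
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, score):
        self.window.append(score)
        if len(self.window) < self.window.maxlen:
            return False  # not enough live data yet
        # z-score of the live window mean against the reference mean
        live_mean = mean(self.window)
        std_err = self.ref_std / (len(self.window) ** 0.5)
        z = abs(live_mean - self.ref_mean) / std_err
        return z > self.z_threshold  # True => investigate

# Hypothetical usage: reference scores from a validation run.
random.seed(42)
reference = [random.gauss(0.5, 0.1) for _ in range(1000)]
monitor = DriftMonitor(reference)
```

In practice teams monitor far more than the score mean (per-segment rates, input distributions, adversarial-input signatures), but the design choice is the same: the alarm is attached to model behaviour in production, not to pre-production validation.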

“AI security doesn’t end at deployment,” Tickoo emphasises. “That assumption is one of the biggest risks enterprises take today.”

What security by design really means for AI

For Indian enterprises scaling AI on GPU-powered cloud platforms, the path forward is not about slowing innovation but embedding security into the fabric of AI operations.

“Security by design means protection moves at the speed of AI,” Tickoo says. “It’s built into ML pipelines, not added as a gate at the end.”

This starts with securing machine learning CI/CD pipelines, where data integrity checks, container scanning, dependency validation, and policy enforcement run in parallel with model development. Purpose-built, cloud-native security platforms designed for AI workloads play a critical role here, offering visibility without competing for GPU resources.

“You can’t afford security tools that slow down training or inference,” she notes. “Protection has to be agentless, intelligent, and workload-aware.”

Equally important is governance. Clear policies around data provenance, model validation, runtime monitoring, and incident response allow teams to move quickly within well-defined guardrails rather than treating security as a bottleneck.

“The goal is not to choose between speed and safety,” Tickoo concludes. “It’s to make trust a built-in outcome of AI innovation. Because in the AI era, once trust is lost, no amount of scale can bring it back.”
