Express Computer

Why infrastructure readiness is the new battleground in AI


As businesses transition from AI pilots to widespread adoption, the debate is now unequivocally shifting from models to infrastructure. Infrastructure readiness, robustness, and trust are fast becoming the new differentiators that will help determine whether AI projects ultimately deliver business value.

In a special interaction, Genius Wong, EVP – Core & Next Gen Connectivity Services & CTO; Bhaskar Gorti, EVP – Cloud & Cybersecurity Services; and Vaibhav Dutta, Vice President and Global Head – Cybersecurity Products & Services at Tata Communications, share their insights on what it really means for businesses to be AI-ready and, more importantly, AI-resilient.

Infrastructure moves from background to centre stage
For much of the last two years, enterprise conversations around AI have been dominated by compute performance, GPUs, and model scale. However, as organisations attempt to move AI from pilots to production, foundational gaps in infrastructure are becoming increasingly visible.

“Enterprises are beginning to recognise infrastructure as a holistic system rather than a collection of discrete components. High-performance networking, low-latency architectures, and intelligent traffic orchestration are becoming critical as AI workloads introduce new patterns of data movement,” says Wong, adding that training and inference traffic, for instance, demand different performance characteristics, requiring networks to evolve beyond traditional connectivity and become intelligent orchestration layers.

Balancing performance with security in the AI era
While performance remains essential, enterprise deployments are forcing a recalibration of priorities. Bhaskar Gorti emphasises that in production environments, security must evolve alongside speed. “Historically, organisations often traded off one for the other: highly performant environments at the expense of security controls, or secure environments that struggled to scale,” he adds.

Today, enterprises are seeking architectures where performance and protection coexist, particularly as regulatory scrutiny intensifies across sectors such as BFSI and healthcare. AI readiness is no longer about choosing between performance and governance; instead, infrastructure must embed both from the outset to ensure compliant and seamless data flows.

The multi-cloud reality and the persistence of blind spots
Another key challenge is the fragmentation created by multi-cloud environments. While cloud adoption has accelerated innovation, it has also introduced operational silos. Each hyperscaler ecosystem operates differently, often requiring separate skills, tools, and integration strategies.

Gorti points out, “Enterprises are rarely operating within a single cloud environment. Workloads are distributed across multiple public clouds, private infrastructure, and SaaS platforms, making orchestration increasingly complex. These blind spots are rarely intentional; they emerge when teams solve immediate business challenges without a unified architectural framework.”

As AI adoption accelerates, these inconsistencies become more visible, reinforcing the need for stronger foundational design.

A paradigm shift: Moving compute closer to data
A recurring theme, the speakers highlight, is the changing relationship between data and compute. For decades, enterprise IT strategies focused on moving data towards centralised compute environments. AI is reversing that paradigm.

Wong observes that organisations are increasingly bringing intelligence closer to where data resides, including edge environments, to meet latency and performance requirements. This shift represents a broader transformation in how enterprises design networks, infrastructure, and application architectures, as distributed intelligence becomes essential for scaling AI responsibly.

Regulated industries face a dual mandate
From a cybersecurity perspective, Vaibhav Dutta highlights the growing importance of guardrails as regulated industries accelerate cloud and AI adoption. “While sectors such as banking and healthcare were initially cautious, the pressure to innovate is driving greater openness to distributed architectures,” he notes.

The challenge lies in maintaining data sovereignty, privacy, and governance while enabling the speed demanded by AI-driven environments. Guardrails around APIs, privacy controls, and risk governance frameworks are becoming integral to infrastructure strategies, ensuring that innovation does not compromise compliance.

The evolving mandate for CIOs: From connectivity to intelligent fabric
Looking ahead, the speakers agree that CIO priorities are undergoing a structural shift. Networks are evolving from traditional connectivity frameworks into intelligent fabrics capable of supporting compute, data, and users in a unified ecosystem.

Future-ready organisations will need to embed AI-specific governance models, rethink risk management structures, and design infrastructure that aligns with regional data residency and regulatory requirements. Planning horizons are shortening, and infrastructure decisions made today will determine how effectively enterprises scale AI in the years ahead.

“Beyond technology, cultural change remains critical. Collaboration, openness to experimentation, and ecosystem partnerships will play a decisive role in enabling organisations to move from experimentation to sustained innovation,” points out Gorti.

From AI ambition to AI resilience
The discussion reflects a broader industry transition, from an era defined by AI ambition to one focused on operational resilience. Enterprises are recognising that long-term success will depend less on how advanced an AI model is and more on whether the underlying infrastructure can scale securely, adapt to distributed environments, and support continuous innovation without compromising trust.
