Express Computer

Most AI strategies will collapse without infrastructure discipline: Sesh Tirumala, CIO, Western Digital


At a time when enterprises are rushing to scale AI, the real bottleneck is not compute—it is the foundation beneath it. Storage, data integrity, and infrastructure discipline remain the most overlooked yet decisive factors in determining whether AI delivers value or collapses under its own ambition.

In this interaction, Sesh Tirumala, Chief Information Officer at Western Digital, brings a practitioner’s lens to the conversation. With over three decades of experience and a track record of executing large-scale, zero-downtime transformations across global operations, he cuts through the hype to focus on what actually enables AI at scale.

His perspective is clear: without a strategically designed, resilient, and scalable storage architecture, AI initiatives will remain stuck in pilots. For India’s fast-moving enterprises, the risk is not a lack of investment, but investment built on a weak foundation.

In this interaction with Express Computer, Tirumala outlines where AI strategies are breaking down, what true readiness looks like, and why infrastructure discipline—not experimentation—will separate success from failure.

Some edited excerpts:

Where do you see most AI strategies collapsing — technology choices, data quality, governance, or leadership alignment?

Most AI strategies don’t fail because of technology choices; they fail because of misalignment and lack of discipline around outcomes.

AI fails when leadership alignment, operating clarity and business outcomes are not defined upfront. Without clear success metrics tied to revenue, productivity, risk reduction, or customer and employee experience, organisations fall into isolated pilots and fragmented experimentation rather than scalable value creation.

A second hurdle is data quality and governance. If your data isn’t trusted, connected, and stewarded end-to-end, the model quality won’t matter. AI simply amplifies the foundation you give it, whether it is good or bad.

In India, especially, where enterprises are scaling rapidly and digitising at pace, the temptation is to move fast. But without data maturity and governance guardrails, speed becomes a drag. AI must sit on a trusted data backbone and clear decision rights.

When you evaluate AI readiness, what are the top five infrastructure capabilities that must exist?

When I evaluate AI readiness, I don’t start with models or tools. I start with the foundation that allows AI to operate reliably across the business.

Below are the five infrastructure capabilities that must exist before AI delivers enterprise value:

1. Security and identity baked in: Zero-trust access, strong identity and access management, encryption, and segmentation across on-premises, cloud, and SaaS are not optional. As AI operates at scale under evolving regulations, security cannot be treated as an afterthought. Trust is the prerequisite for scale.

2. Resilience and recoverability: Tested backups, disaster recovery protocols, and operational continuity are non-negotiable for factories, supply chains, and digital services that do not pause for model experiments.

3. Observability for the full stack: Organisations need end-to-end visibility across data pipelines, applications, and infrastructure to manage performance, cost, and risk. Without this visibility, organisations cannot diagnose model drift, identify bottlenecks, or control cloud spending as AI workloads scale.

4. AI compute and model runtime platform: Organisations need a scalable environment to train, fine-tune, and run models reliably. This includes access to compute, standardized runtime environments, and the ability to operationalize models consistently across use cases. Without this layer, AI remains in experimentation and cannot scale across the enterprise.

5. Data, knowledge, and AI execution layer: A strong data foundation ensures enterprise data is clean, connected, and trusted across systems. If data isn’t trusted and traceable across the organisation, AI models simply industrialize inconsistency. On top of that, a knowledge graph organizes this data into a clear map of how the business actually works, linking customers, products, suppliers, and processes so AI understands relationships, not just data points.

This context feeds into the AI model and runtime layer, where standardized environments, APIs, and MLOps or LLMOps workflows allow models to move from pilots into production and scale across the enterprise. As AI adoption grows, including across India’s rapidly expanding digital economy and data center ecosystem, infrastructure discipline becomes the real differentiator between experimentation and enterprise-scale deployment.
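The knowledge-graph idea Tirumala describes — linking customers, products, and suppliers so AI reasons over relationships rather than isolated records — can be sketched with plain Python structures. All entities and relations below are hypothetical placeholders, not a real enterprise schema:

```python
from collections import defaultdict, deque

# Hypothetical knowledge graph: typed edges read left to right,
# e.g. supplier S1 supplies product P1.
EDGES = [
    ("supplier:S1", "supplies", "product:P1"),
    ("supplier:S2", "supplies", "product:P2"),
    ("product:P1", "component_of", "product:P3"),
    ("product:P3", "purchased_by", "customer:C1"),
    ("product:P2", "purchased_by", "customer:C1"),
]

graph = defaultdict(list)
for src, rel, dst in EDGES:
    graph[src].append((rel, dst))

def reachable_from(start):
    """Walk the graph breadth-first to find everything downstream of an entity."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for _, dst in graph[node]:
            if dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

# Which downstream products and customers does supplier S1 touch?
print(sorted(reachable_from("supplier:S1")))
# → ['customer:C1', 'product:P1', 'product:P3']
```

A query like this — tracing a supplier through components to the customers it ultimately affects — is exactly the relational context a flat table cannot give a model.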

Alongside all of this is organisational readiness. Even with the right infrastructure, AI won’t deliver value unless teams understand how to use it and leaders are willing to redesign work around it. Preparing people to work alongside AI is just as important as preparing technology.

How do you modernize infrastructure in always-on environments without operational disruption? What must be the key guardrails?

We operate with a simple mandate: modernization must be driven by performance. You cannot transform if you compromise uptime, revenue enablement, cost discipline, or supply chain continuity. When operational excellence is table stakes, the guardrails are straightforward.

Business-first cutovers with defined success metrics and rollback plans: Every modernization decision starts with the business outcome, not the technology preference. We establish clear success criteria before any change window opens, and we architect rollback capabilities as rigorously as we design the forward path.

Tight change discipline with clear go/no-go criteria: Changes advance only when predetermined conditions are met, not because a project timeline demands it. Technical validation, business stakeholder signoff, support readiness, and monitoring infrastructure must all be confirmed before cutover.

Simplify instead of replicating: Retire technical debt and reduce complexity wherever possible. Modernization isn’t about introducing complexity into new environments. It’s an opportunity to eliminate redundant systems, consolidate architectures, and reduce operational burden.
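The go/no-go discipline described above can be sketched as a simple gate: a cutover proceeds only when every predetermined condition is confirmed, never because a timeline demands it. The specific check names below are illustrative, not an actual runbook:

```python
# Illustrative cutover gate. All predetermined conditions must hold;
# the check names are hypothetical examples.
GO_NO_GO_CHECKS = {
    "technical_validation_passed": True,
    "business_stakeholder_signoff": True,
    "support_team_ready": True,
    "monitoring_in_place": True,
    "rollback_plan_tested": False,  # e.g. rollback rehearsal not yet done
}

def cutover_decision(checks):
    """Return 'GO' only if every condition is met; otherwise name the blockers."""
    blockers = [name for name, ok in checks.items() if not ok]
    return "GO" if not blockers else "NO-GO: " + ", ".join(blockers)

print(cutover_decision(GO_NO_GO_CHECKS))
# → NO-GO: rollback_plan_tested
```

The point of encoding the gate is that a single unmet condition blocks the change and is named explicitly, so the decision never defaults to the schedule.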

Velocity matters. But velocity is more than speed. It is disciplined execution with clarity. When we separated systems and consolidated ERP platforms, we moved fast because objectives were clear, ownership was defined, and success criteria were unambiguous. Teams knew exactly what they were accountable for, what success looked like, and where decision rights resided.

That same discipline applies to AI infrastructure. Organisations rushing to deploy AI without establishing these guardrails discover that models trained on unstable data pipelines, deployed without rollback plans, or implemented without clear business ownership create more risk than value.

How must the CIO role evolve in an AI-first enterprise?

In an AI-first enterprise, the CIO’s role expands from being a technology leader to a business transformation leader.

I think about this evolution across three horizons: run the enterprise reliably, digitize and automate workflows, and ultimately deliver an intelligence-driven company.

First, run the enterprise reliably. Operational excellence remains the foundation. AI cannot scale on unstable systems, fragmented data, or unclear ownership. This means resilient infrastructure, trusted data, strong governance, and clear decision rights, ensuring the business can move with velocity without introducing risk.

Second, digitize and automate workflows. This is where everyday AI and functional AI play a role, freeing up capacity and addressing inefficiencies within functions. But the goal is not isolated automation. It is simplifying how work gets done and creating consistency across processes so the organisation can operate more effectively end-to-end.

Third, and most importantly, deliver an intelligence-driven company. This requires shifting from functional optimization to value-stream thinking, removing friction across operations, supply chain, engineering, finance, and beyond, and aligning work to the customer or end user. AI becomes a way to connect decisions, data, and workflows end to end, not just optimize them in silos.

Across all three horizons, the CIO’s role is to align technology, data, processes, and people around clear business outcomes. The focus must move beyond experimentation to execution, ensuring AI investments are tied to measurable impact across revenue, productivity, risk, and experience.

To enable this, the CIO must own the full data and AI lifecycle. Building the right foundation of trusted data, secure platforms, and governance is what allows AI to scale responsibly and consistently.

Ultimately, the role of the CIO is to turn AI from a set of capabilities into a system of execution. That means creating the conditions where the organisation can operate with clarity, adapt continuously, and convert AI into real, sustained business outcomes.

What hidden costs of AI do boards typically underestimate?

One of the hidden costs boards often underestimate is the organisational change required to make AI useful.

AI isn’t just a technology investment. It requires new skills, new workflows, and sometimes entirely new ways of making decisions. That means investing in training, redesigning processes, and helping people understand how AI fits into their daily work.

Boards often focus on the cost of the technology itself: models, infrastructure, and licenses. But the larger investment is preparing the organisation to use it well. Without that work, AI tends to remain in pilots rather than delivering meaningful business impact.

Another major factor is the token consumption explosion. As AI usage scales, the cost of inference and model interaction grows rapidly and can become a significant operational expense if not actively managed.
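To make the token-consumption point concrete, a back-of-the-envelope cost model shows how inference spend scales linearly with usage. The prices and volumes below are placeholder assumptions, not actual vendor rates:

```python
# Back-of-the-envelope inference cost model. All numbers are
# illustrative assumptions; substitute your vendor's actual pricing.
PRICE_PER_1K_INPUT_TOKENS = 0.003   # USD, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD, hypothetical

def monthly_cost(requests_per_day, in_tokens, out_tokens, days=30):
    """Estimated monthly inference spend for one workload."""
    per_request = (in_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return requests_per_day * per_request * days

# A pilot: 1,000 requests/day, 2K tokens in, 500 tokens out per request.
pilot = monthly_cost(1_000, 2_000, 500)
# The same workload at 100K requests/day costs 100x as much.
scaled = monthly_cost(100_000, 2_000, 500)
print(f"pilot: ${pilot:,.0f}/mo  scaled: ${scaled:,.0f}/mo")
# → pilot: $405/mo  scaled: $40,500/mo
```

The arithmetic is trivial, which is precisely the trap: a cost that is negligible in a pilot becomes a material operating expense once adoption scales, unless it is actively managed.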

In practice, the largest investments tend to occur in data readiness, workflow integration, operationalization, governance, and change management, not just in the models themselves.

Many of the largest costs also sit in the foundation and long-term lifecycle around AI. Preparing and governing data across systems, maintaining infrastructure, monitoring models over time, and ensuring security, compliance, and workforce readiness all require sustained investment. Without those elements in place, even well-built AI models struggle to deliver reliable business value.

How do you ensure AI models are secure, compliant, and auditable in regulated environments?

Ensuring AI models are secure, compliant, and auditable in regulated environments starts with a security-first design approach. Controls need to be embedded across the entire stack, from data to identity and access, including least-privilege access, encryption, segmentation, and strong policy enforcement across hybrid environments.

Governance also needs to be operational: maintaining clear data lineage that traces every input back to its source, proper model documentation, approval workflows, and continuous monitoring, so organisations can explain what happened, why it happened, and who authorized it.
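One lightweight way to make that auditability operational is an append-only record that ties each model interaction to its data source, model version, and approver. The record shape below is a sketch under assumed field names, not a compliance standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(source_id, model_version, approver, payload):
    """Build a tamper-evident audit entry: the checksum changes if any
    field is altered after the fact. All field names are illustrative."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_id": source_id,          # lineage: where the input came from
        "model_version": model_version,  # what produced the output
        "approver": approver,            # who authorized the workflow
        "payload": payload,
    }
    body = json.dumps(entry, sort_keys=True).encode()
    entry["checksum"] = hashlib.sha256(body).hexdigest()
    return entry

rec = audit_record("crm.accounts.v7", "risk-model-2.3", "jdoe", {"score": 0.82})
assert len(rec["checksum"]) == 64  # SHA-256 hex digest
```

Recomputing the hash over the stored fields and comparing it to the saved checksum answers the three audit questions in one lookup: what happened, from which source, and who signed off.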

Finally, architecture decisions matter. Maintaining portability where possible, validating vendors carefully, and enforcing strict guardrails around sensitive data helps organisations balance innovation with regulatory compliance.

What is one uncomfortable truth about AI transformation that most leaders are not willing to admit?

One uncomfortable truth about AI transformation is that it forces leaders to rethink how work actually gets done, and in doing so, it exposes the problems that already exist in the organisation.

Many companies approach AI as a technology upgrade. In reality, it requires redesigning processes, clarifying decision ownership, and addressing fragmentation in how work flows across the business. That’s uncomfortable, because it means questioning long-standing ways of working.

AI doesn’t transform companies on its own. It amplifies whatever foundation you give it. If processes are slow, data is inconsistent, or responsibilities are unclear, AI will scale those issues rather than fix them.

The organisations that succeed with AI are the ones willing to address those underlying realities, not just deploy new tools.

If you had to pause all AI investments for six months and fix only one foundational issue, what would it be?

If I had to pause AI investments for six months, I would focus on fixing the foundation, both organisational and data.

AI doesn’t fail because of the tools. It fails when people don’t understand how to use it, don’t trust it, or don’t see how it connects to their work. You can deploy the best models available, but if adoption isn’t there, value never materializes.

At the same time, trust in AI is directly tied to trust in data. If the data is fragmented, inconsistent, or poorly governed, the outputs won’t be reliable and people will disengage quickly. That’s why preparing the organisation and fixing the data foundation are not separate efforts. They reinforce each other.

In practice, this means ensuring data is trusted, connected, and well-governed, while also helping teams understand where AI fits into their decisions, workflows, and outcomes. When both are in place, AI moves from experimentation to execution.

Ultimately, AI transformation is a people transformation enabled by a strong data foundation. It requires new skills, a willingness to adapt, and leaders who are ready to rethink how work gets done.

When people trust the data, understand the outcomes, and feel confident using the tools, AI shifts from something interesting to something that actually changes the business.
