In this exclusive interaction with Express Computer, Par Botes, Vice President of AI Infrastructure at Pure Storage, shares his insights on the key enterprise AI trends, the evolution of agentic AI, simplifying developer experience, and the real-world applications of AI in healthcare and finance. He also discusses innovations such as their Key-Value Accelerator (KVA) and the future of data infrastructure in the age of AI.
From your vantage point at Pure Storage, what are the key trends currently shaping enterprise AI adoption, and how are organisations rethinking their data infrastructure to keep up?
That’s a very good question. If we go back a little: until recently, the big focus in AI was on building large language models (LLMs), an effort driven by a handful of very large technology companies.
In the last year or year and a half, however, we’ve seen this evolve towards enterprise AI. Companies are now building large-scale inference for their own deployments using their own data. This takes shape in two major forms: agentic AI and enterprise-specific AI models.
This trend is accelerating tremendously, though it’s still in its early stages. I think it will take another two to three years before such deployments become widespread. But we’re already witnessing a significant shift where enterprises are moving from experimentation to scaled AI adoption using their unique data and infrastructure.
You mentioned agentic AI. It’s gaining a lot of traction in the tech world. How would you define agentic AI in an enterprise context, and what infrastructure shifts do organisations need to support such autonomous, goal-driven systems at scale?
That’s an excellent question. To deploy AI effectively at scale, I always tell people that the first thing they need is a clear definition of success. Too many organisations start building without defining what success or failure looks like. Without that clarity, it’s hard to measure whether your AI system is performing as intended.
Take a conversational model like ChatGPT: how do you judge whether its response is “good”? It’s difficult unless you’ve defined the success parameters.
Once an enterprise has defined that, the next step is to build strong data governance around it. This governance creates the guardrails necessary to ensure AI behaves in line with business goals and ethical boundaries.
So, for enterprises preparing to adopt agentic AI, executives should primarily focus on three things: defining what they want AI to do, determining how to measure its success, and building a governance model that ensures accountability and transparency.
Simplicity for developers is often overlooked, especially in the AI context. How is Pure Storage helping reduce complexity in AI infrastructure for developers, particularly those working with AI-assisted coding tools or large-scale models?
We’re trying to make things radically simpler. The first step, as I mentioned earlier, is effective data management. The second is recognising that AI’s data access patterns are unpredictable—there’s no fixed “hot” or “cold” data anymore.
In traditional systems, you could classify data into tiers like frequently accessed (hot) and rarely accessed (cold). But in AI, all data can become relevant at any moment. AI models continuously access large datasets, and performance consistency across the entire dataset becomes vital.
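The hot/cold point can be made concrete with a small, illustrative simulation (not Pure Storage code): an LRU cache sized at 10% of the dataset performs well when accesses concentrate on a “hot” subset, but its hit rate collapses toward the size ratio under the uniform access patterns typical of AI workloads.

```python
# Illustrative sketch: why hot/cold tiering breaks down when access
# patterns are uniform, as in many AI training and inference workloads.
import random
from collections import OrderedDict

def hit_rate(accesses, cache_size):
    """Simulate an LRU cache and return the fraction of hits."""
    cache, hits = OrderedDict(), 0
    for key in accesses:
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # LRU: refresh on hit
        else:
            cache[key] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least-recently used
    return hits / len(accesses)

random.seed(0)
n_blocks, cache_size = 10_000, 1_000  # cache holds 10% of the data
skewed  = [random.randint(0, 99) for _ in range(50_000)]          # hot subset
uniform = [random.randint(0, n_blocks - 1) for _ in range(50_000)]  # no hot data

print(f"skewed access hit rate:  {hit_rate(skewed, cache_size):.2f}")   # ~1.00
print(f"uniform access hit rate: {hit_rate(uniform, cache_size):.2f}")  # ~0.10
```

With no stable hot set, no tier assignment helps; only uniformly fast media across the whole dataset does, which is the argument for all-Flash.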
That’s why we’re seeing widespread deployments of Flash technology, even among hyperscalers. AI is essentially forcing this transformation. Enterprises will soon follow suit because consistent, high-performance access across all data is key.
If you can deliver that consistent performance and rapid data response, frameworks and developer tools will automatically become simpler to use. It’s about removing the friction so developers can focus on innovation rather than managing data bottlenecks.
Finance and healthcare are among the most regulated yet innovative sectors. Could you share any real-world examples of how AI backed by robust infrastructure is driving measurable impact in these areas?
Healthcare is a massive and fascinating space for AI. We support customers in multiple areas—drug discovery, clinical diagnostics, and digital health records. Personally, I’m more familiar with the discovery side, which tends to involve more advanced AI systems.
We work with companies that are using AI to discover new drugs and develop new methods of disease diagnosis. These models analyse molecular structures and predict how different molecules can lead to potential new medicines—it’s quite impressive.
Another fascinating use case is translational science. It involves combining different data types like MRI scans and CT scans and using AI to uncover new insights that were previously invisible. This cross-comparison of data could transform how we detect diseases early and improve patient outcomes dramatically.
Given the sensitivity of healthcare data, ethical AI becomes crucial. What suggestions or cautions do you think this sector should follow?
The current frameworks around patient confidentiality are, in my view, robust and well-structured. As long as we maintain and enforce those existing mechanisms without eroding them, the ethical foundation remains strong.
This is largely an area of policy and governance, and I don’t expect radical changes in the near future. The key lies in preserving what already works well, especially when it comes to patient data privacy and informed consent.
The AI landscape is evolving at an extraordinary pace. Looking ahead, what does the future hold for AI infrastructure a year or so from now?
It’s fascinating to think about how rapidly things are evolving. Just three years ago, building and deploying AI systems was extremely complicated and required deep research expertise. Today, thanks to the evolution of frameworks, it’s becoming much easier and more accessible.
Let me share a story. During a conference in Singapore, I met a team from Mongolia, which is a small country population-wise, with its own language and the Cyrillic alphabet. There was no LLM available for their language. Yet, they managed to build their own local LLM for digital banking in their native language.
This would have been impossible three or four years ago. But now, with easier-to-use frameworks, even smaller nations can build language-specific models. It’s an example of how AI has become more approachable, inclusive, and localised.
At Pure Storage, we’ve recently announced a breakthrough innovation called the Key-Value Accelerator (KVA), a protocol-agnostic key-value caching solution. Combining KVA with FlashBlade delivers faster inference, higher GPU efficiency, and consistent performance across AI environments. It’s designed to make AI inference up to 20 times faster by accelerating the memory used by LLMs, and it enables longer memory retention for extended conversations.
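The key-value caching idea that accelerators like KVA build on can be sketched at a toy level. This illustrative Python is not Pure’s implementation (which caches attention tensors at the storage layer, not strings); it only shows why caching per-position results lets each decoding step compute entries for new tokens alone instead of recomputing the whole prefix.

```python
# Toy sketch of key-value caching in autoregressive LLM decoding.
class KVCache:
    """Caches per-token computation so each decoding step only
    processes newly generated tokens, not the entire prefix."""

    def __init__(self):
        self._store = {}        # token position -> cached "key/value" entry
        self.computations = 0   # counts how often we had to do real work

    def get_or_compute(self, position, token):
        if position in self._store:
            return self._store[position]
        self.computations += 1
        entry = f"kv({token}@{position})"  # stand-in for an attention K/V pair
        self._store[position] = entry
        return entry

def decode_step(cache, prefix_tokens):
    # Without a cache, every step recomputes K/V for the full prefix;
    # with a cache, only unseen positions trigger computation.
    return [cache.get_or_compute(i, t) for i, t in enumerate(prefix_tokens)]

cache = KVCache()
decode_step(cache, ["the", "cat"])          # computes 2 entries
decode_step(cache, ["the", "cat", "sat"])   # computes only 1 new entry
print(cache.computations)                   # 3 computations instead of 5
```

Serving the cache from fast shared storage rather than scarce GPU memory is what lets longer conversations retain context without starving the GPUs.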
What’s exciting is that this innovation was a global collaboration: the front-end work was done in Europe, mathematical components in California, and the entire backend was built right here in India. It’s a testament to India’s exceptional engineering talent and the speed at which teams can now collaborate globally to build world-class AI solutions.
Beyond finance and healthcare, which other sectors do you believe will be revolutionised by AI in the coming years?
There are four sectors that I believe will be at the forefront: finance, healthcare, telecommunications, and government, especially the national security segment. These four are already seeing significant adoption of AI and will continue to lead in innovation and investment.
Many people are also talking about quantum computing. Do you think it will play a major role in the AI landscape soon?
Quantum computing holds tremendous promise, but it’s still a bit farther out. I experimented with it recently, and it’s quite fascinating, but we’re still in the early stages.
That said, there’s one important thing for computer scientists and businesses to consider today, especially those working in banking or any data-sensitive industry. Start preparing your encryption methods to be quantum-resistant. Once quantum computing becomes practical, traditional encryption could be vulnerable. So it’s better to be proactive about it now.
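That preparation can start with a simple inventory. The sketch below is a hedged illustration, not a Pure Storage tool; the classification follows widely published post-quantum guidance (Shor’s algorithm breaks RSA and elliptic-curve schemes, while symmetric ciphers and hashes are only weakened by Grover’s algorithm and survive with larger keys or outputs).

```python
# Hedged sketch: inventory which algorithms in use need migration
# before quantum computers become practical.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DH-2048"}
QUANTUM_WEAKENED = {"AES-128", "SHA-256"}      # larger keys/outputs restore margin
CONSIDERED_SAFE = {"AES-256", "SHA-384", "ML-KEM-768"}  # ML-KEM: NIST FIPS 203 PQC KEM

def audit(algorithms):
    """Return the algorithms that need migration first, sorted by name."""
    return sorted(a for a in algorithms if a in QUANTUM_VULNERABLE)

inventory = ["RSA-2048", "AES-256", "ECDSA-P256", "SHA-384"]
print(audit(inventory))  # ['ECDSA-P256', 'RSA-2048']
```

The point is “harvest now, decrypt later”: data encrypted today with vulnerable public-key algorithms can be recorded and broken once quantum machines arrive, which is why the migration needs to start early.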
As demands for speed, scale, and sustainability grow, what innovations should we expect in AI infrastructure to support the next generation of intelligent applications?
The next few years will bring tremendous improvements in ease of deployment and use. Today, deploying AI still requires significant skill, but that’s changing quickly. We’ll see the rise of smaller, more efficient models that can run on modest infrastructure, making AI accessible to far more organisations.
Training models will also become simpler. It’s already common for enterprises to fine-tune models on their own data, but it’ll become even easier.
On the sustainability front, we’ll continue to see advances in energy efficiency. AI workloads today are somewhat energy-intensive, but rapid innovations are improving efficiency.
Globally, we’re also facing a shortage of data centre space and energy capacity, and India is no exception. Some countries have even imposed temporary moratoriums on new data centres due to energy constraints. However, competitiveness eventually forces these restrictions to be lifted.
For Indian enterprises, my advice would be to plan infrastructure three to five years ahead and secure enough energy and data centre capacity today to support the AI workloads you want to run in the future, because by the time you need it, it may be too late to acquire.
Before we conclude, could you share more about the innovations your team in India has been working on?
I’m incredibly proud of what our Indian team has accomplished. They played a leading role in developing the Key-Value Accelerator (KVA) and the RDMA-enhanced S3 protocol, both world-first innovations from Pure Storage.
RDMA allows ultra-fast remote data access over a network, and we successfully integrated it into S3, something no one else had done before. Both of these innovations were 100% built in India, and they’ve dramatically improved the performance of AI workloads.
The pace, skill, and enthusiasm of our engineers here are truly world-class. I’m even considering spending a few months in India to work more closely with the team on upcoming projects; it’s an environment that fosters creativity, collaboration, and speed like few others.