The potential use cases for Agentic AI are enormous: Babak Hodjat, Chief AI Officer, Cognizant

Artificial intelligence has moved from experimentation to large-scale deployment in just a few years, and few people have had a front-row seat to that evolution quite like Babak Hodjat, Chief AI Officer at Cognizant. Hodjat leads Cognizant’s AI Research Labs, where teams of researchers and developers are advancing the frontiers of agentic AI and building next-generation capabilities across the company’s platforms and services. A serial Silicon Valley entrepreneur, he previously co-founded Sentient, the company behind the world’s largest distributed AI system, and later launched the world’s first AI-driven hedge fund, Sentient Investment Management.

Earlier in his career, Hodjat co-founded Dejima and invented the agent-oriented technology that eventually powered the intelligent interface behind Apple’s Siri. With more than 50 research publications and 39 issued US patents across artificial intelligence, machine learning, natural language processing and distributed AI, Hodjat has spent decades shaping the technologies that are now defining the modern AI era.

In this conversation with Express Computer, he discusses the rise of agentic AI systems, the balance between trust and risk in AI deployment, the future of engineering roles, and why countries like India could become major exporters of applied AI.

Some edited excerpts:

What are some of the possibilities about AI that excite you from a transformational perspective?

We now have building blocks in AI that are essentially large language models packaged with tools that can connect to other agents. These agents can interact with other agents, creating an entirely new paradigm. You can engineer a whole system of agents that autonomously or semi-autonomously augment a business. This is possible regardless of the application or domain. The potential use cases are enormous. For me, helping build a reliable and safe agentic future is one of the most exciting developments in AI today.

What are some of the things that you think we can do today that were not possible maybe two or three years ago?

Let me give you an example. One of the breakthroughs from our lab in San Francisco last year allows large language models and agents to perform reasoning steps reliably for up to a million steps. That simply wasn’t possible until recently.

This capability allows us to build error correction into massively multi-agent systems and reliably support long-chain reasoning. From a technology perspective, that is a major breakthrough.

Another advancement is enabling agents to decide when they can make decisions autonomously and when they should involve a human. We call this human on the loop. In this model, the human is not rubber-stamping every action the agent takes. Instead, the system determines when human intervention is needed.

Agents can now measure their own confidence. If confidence is low, they escalate to a human. If the situation is routine and confidence is high, they proceed autonomously.

A good example is a telecom network. Imagine agents deployed across different network nodes. These agents monitor the network and make decisions related to load balancing, fault detection, and issue resolution. They can also report to higher-level agents or to human operators.

At the speed at which these decisions need to be made, you cannot have a human involved in every step. That defeats the purpose. However, if a situation arises that the system has never encountered before, the agent’s confidence will naturally drop. When that confidence falls below a threshold, the system can escalate the issue.

For example, the agent might say: We have detected an error in the network that was never seen before. We think the solution could be this, but we are not certain. At that point it can surface the issue to a human operator or a rule-based system. This kind of model can apply to many different domains.
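The escalation pattern described above can be sketched in a few lines. This is a hypothetical illustration only: the `Decision` structure, the 0.8 threshold, and the routing messages are all assumptions for the sake of the example, not a description of Cognizant's systems.

```python
from dataclasses import dataclass

# Hypothetical cutoff; a real deployment would tune this per domain.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Decision:
    action: str
    confidence: float  # agent's self-reported confidence in [0, 1]

def handle(decision: Decision) -> str:
    """Route a decision: act autonomously when confident, escalate otherwise."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-executed: {decision.action}"
    # Low confidence: surface the issue to a human operator
    # (or a rule-based fallback system), as described above.
    return (f"escalated to human: {decision.action} "
            f"(confidence {decision.confidence:.2f})")
```

In the telecom scenario, a routine load-balancing decision with high confidence would be auto-executed, while a never-before-seen fault pattern would naturally score low and be surfaced to an operator.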

We have also seen cases where AI can spiral out of control. What are some of the things that scare you as a technologist?

There is always a balance. On one side is a complete lack of trust, where organisations refuse to use large language models or require a human in the loop for every step. On the other side is over-trust. Systems like OpenClaw sometimes demonstrate how people anthropomorphize AI to the point where they assume it can do anything autonomously in the real world. That is dangerous.

These systems have limitations. One core limitation is that AI systems today do not easily learn on the job by changing their core model. Another limitation is that they are non-deterministic. They may perform correctly five out of six times, but the sixth time they might produce something completely incorrect.

If you deploy such systems without properly engineering trustworthiness and safeguards, you expose yourself to risk. All it takes is one serious incident for organisations to swing to the other extreme and lose trust completely.

Like any other software system, AI systems must be designed to be safe and reliable. There is a right way and a wrong way to build them.
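One common safeguard against the non-determinism described above is to validate every model output and retry (or escalate) on failure. The sketch below is a generic illustration of that pattern, not Hodjat's or Cognizant's implementation; the flaky "model" and the JSON validator are stand-ins invented for the example.

```python
import json

def generate_with_validation(generate, validate, max_attempts=3):
    """Call a non-deterministic generator, retrying until output validates."""
    last_error = None
    for _ in range(max_attempts):
        output = generate()
        ok, reason = validate(output)
        if ok:
            return output
        last_error = reason
    # All attempts failed: fail loudly rather than pass bad output downstream.
    raise RuntimeError(f"validation failed after {max_attempts} attempts: {last_error}")

# Demo: a stand-in "model" that fails on its first call, simulating the
# "correct five out of six times" behaviour described above.
responses = iter(["not json", '{"status": "ok"}'])

def fake_model():
    return next(responses)

def is_valid_json(text):
    try:
        json.loads(text)
        return True, ""
    except ValueError as exc:
        return False, str(exc)

result = generate_with_validation(fake_model, is_valid_json)
```

Here the first, malformed response is rejected by the validator and the retry succeeds; in production the final `RuntimeError` path is where a human-on-the-loop escalation would hook in.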

Due to the impact of AI, what happens to the traditional time-and-materials model that the Indian IT services industry has been used to?

For starters, throughput improves dramatically. AI enables faster cycles, greater agility, and higher productivity. But it does not eliminate the need for services. When productivity increases, expectations also rise. History shows this clearly. If we compare today’s productivity to what existed 30 years ago, we are far ahead and yet the volume of work has only grown. The same will happen with AI. The bar will rise.

I have been working in AI for more than 30 years. Expectations about AI have always exceeded reality. There is often a mystical belief that AI will be able to do everything. That leads to extreme conclusions such as saying professional services will disappear or that all jobs will vanish. That is naive.

It reflects a misunderstanding of the boundaries of what AI can actually do.

Think about travel. In the past two weeks I traveled from Egypt to San Francisco, then to San Diego, then to Delhi, and now here. 150 years ago, that level of travel would have been unimaginable.

That is the kind of world we are moving into. As capabilities increase, expectations increase as well.

Now if AI can write code, test, and do almost everything, what becomes the core skill of an IT services developer? What advice would you give to students and engineering colleges that are preparing for the future?

Designing a four-year curriculum today is incredibly challenging because technology is changing so rapidly. Educational institutions need to build adaptability into their programs. We are already moving from waterfall approaches to agile models because the rate of technological change demands it.

People often ask whether it is still worth learning coding if AI can generate code. My answer is absolutely yes. Understanding coding is even more important now. You need to understand what AI is generating in order to validate and safeguard it.

Even if we move away from traditional programming languages, we will still be “coding” when we design how agentic systems interact with each other. Understanding algorithms and system safeguards becomes critical.

We are now dealing with non-deterministic modules: agents that behave dynamically. Designing safe and reliable systems using these components becomes a discipline of its own.

New roles will emerge: AI engineers, agent engineers, and multi-agent system engineers. The principles, design patterns, and best practices for these systems are still emerging.

These technologies are only a few years old, so the science is still evolving.

At Cognizant, how are you preparing for this shift? What examples or use cases can you share?

We are training associates across the organisation, not just technologists. Last year we ran a vibe coding hackathon, which set a world record for participation. Interestingly, about 40% of participants had never written code professionally.

That shows how we are trying to expose everyone to what is possible with AI, including both its strengths and its limitations.

We are also encouraging the use of AI and agentic AI across our existing client engagements. Clients increasingly expect this capability, and it is also helping us win new deals.

On the research side, I lead our AI lab in San Francisco. We recently opened another AI lab in Bangalore where we are hiring AI PhD researchers. Given the pace of change, it is important to invent new capabilities rather than only adopt them.

We are also using AI internally. For example, we rolled out an agentic intranet last September. In just three months it handled over 11 million transactions for our associates. The system includes hundreds of agents and has significantly reduced the number of internal support tickets because the agents can resolve many issues automatically. That is just one example. We have many agent-based solutions and demos across different domains.

Given your work in AI, how do you see the evolution of NLP?

In natural language processing, entire academic departments are now shutting down because many classical NLP problems have effectively been solved using large language models.

Tasks such as semantic analysis and machine translation can now be handled by relatively modest-sized language models. That demonstrates the power of modern neural networks. However, there is still a lot of work left to do.

One major challenge is explainability. Large language models are inherently difficult to interpret. Their architecture compresses language patterns in ways that are difficult to reverse-engineer. Some neurons may represent multiple functions, and the models themselves are extremely large. Understanding how they arrive at decisions remains an open problem. Solving explainability may require approaches beyond the transformer architecture.

What do you believe are some of the most underestimated societal implications of AI?

That is an interesting question. I suspect society will oscillate between over-trusting and under-trusting AI.

A generation may grow up treating systems like ChatGPT as absolute sources of truth. That could have societal consequences. On the other hand, some people may view AI as inherently dangerous — believing it will destroy jobs or become uncontrollable. We already see some of these extreme views.

Society today often gravitates toward extremes. Ideally, we should find a middle ground: acknowledge the power of AI, recognize its risks, and learn how to use it responsibly. If we highlight both the benefits and the limitations, we can mitigate the risks of both over-trust and under-trust.

If you had to bet on one structural shift AI will force on enterprises in the next four or five years, what would it be?

The biggest shift will be moving from rigid waterfall models to far more agile operating models. Many large projects today are planned five years in advance with fixed requirements. But technology now changes too quickly for that approach. Midway through a project, new technologies emerge and clients naturally expect them to be incorporated.

Organisations will need to adopt much more flexible and adaptive models to keep up with the pace of innovation.
