Agentic AI and the Illusion of Autonomy
Reading about AI every day, one sees examples that range from the interesting to the amazing to the outright scary. For a topic that was restricted to nerdy mathematicians for over half a century, the last three years have seen a rapid progression from the initial ChatGPT release to GenAI and competing Large Language Models (LLMs) to AI agents that can seemingly wrap all of this up in autonomous intelligence.
So AI can now think for itself? It would certainly seem that way if it can book flights, answer emails, schedule meetings, and provide medical and even emotional advice – not bad for something that doesn’t need chai-coffee breaks. So, let’s dig a little deeper.
Agentic AI
If we start by decoding the jargon, “Agentic AI” is just a way of saying an AI system can do tasks for you, on its own, once you give it instructions. While that may not sound too impressive, the key is that it can autonomously adjust its actions to achieve specific goals.
For example, if the instruction is “Book me the cheapest flight to Goa this Friday, aisle seat only”, the AI agent, much like a travel agent actually, will search flights, compare prices, maybe redeem my miles, and send me the ticket — all while I sip my filter coffee. So the leap is from one instruction, one outcome to a comprehensive understanding of the instruction, a far more complicated set of steps to achieve the outcome, steps that themselves vary with the situation and the ask, and the ability to learn over time and keep improving the outcome. If that’s not autonomous intelligence, I don’t know what is.
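The flight-booking leap above can be sketched as a minimal agent loop: one instruction in, several adaptive steps out. Everything here is a hypothetical stand-in — the flight data, the function names, and the booking logic are illustrative, not a real airline API.

```python
# A minimal sketch of the flight-booking instruction as an agent's steps.
# search_flights() and its data are hypothetical stand-ins, not a real API.

def search_flights(destination, day):
    # Stand-in: a real agent would query an airline or aggregator service.
    return [
        {"airline": "A", "price": 4500, "aisle_available": True},
        {"airline": "B", "price": 3900, "aisle_available": False},
        {"airline": "C", "price": 4100, "aisle_available": True},
    ]

def book_cheapest_aisle(destination, day):
    """One instruction, many adaptive steps: search, filter, compare."""
    options = search_flights(destination, day)
    # The agent adapts to the constraint: drop flights with no aisle seat.
    valid = [f for f in options if f["aisle_available"]]
    if not valid:
        return None  # A real agent might relax constraints or ask back.
    return min(valid, key=lambda f: f["price"])

choice = book_cheapest_aisle("Goa", "Friday")
print(choice["airline"], choice["price"])  # C 4100
```

Note how the cheapest flight overall (airline B) is rejected: the "aisle seat only" constraint changes the steps, which is exactly the situational adjustment the paragraph describes.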
Appearances can be deceptive
When we engage with an AI agent and see it chatting with us politely, picking options, reminding us about meetings, and factoring in travel time from the airport, adding considerations we hadn’t thought of and certainly hadn’t instructed, we start to believe it’s doing some independent thinking.
Stating the obvious, because it is so easy to forget: AI agents don’t actually ‘care’ about my beach holiday or my middle-seat nightmares. Everything from the polite language to the complex scenario planning is basically a set of instructions, lots of number crunching and rule following, using patterns learnt from absolutely massive piles of data — all prepared by human programmers.
As sophistication, and accuracy, increase, the real danger is that we start trusting AI too much. At a 70-80% hit rate, I am happy for the help but always double-checking. A 95% hit rate, being right 19 out of 20 times, can mean the 20th ‘miss’ is anything from an irritant (an AI shopper ordering a blue kurta instead of a red one) to a corporate disaster (an AI document generator drafting a legal contract but missing a key detail that is discovered only after the dotted line has been signed).
Trust, but verify
We need to remember that these agents are tools — very clever ones, but still tools. Indeed, the better they become, the cleverer we have to become at catching the mistakes that will inevitably be made.
For instance, many HR teams now use AI agents to shortlist résumés. You can argue this is smart work rather than hard work – more of an HR pattern than an autonomous-agent example, but bear with me. Going from 500 applicants to a shortlist of 10 is not easy. But if the data used to train the AI has biases, these will persist in the results it throws up. Catching human bias takes some doing; when AI copies it at lightning speed, the challenge becomes tougher still.
Addressing the hype
The mind-boggling pace of innovation leaves us amazed, and rightly so. We seem to have a minor innovation a day, a major innovation a month, and a breakthrough a quarter. So the hype and hyperbole are very natural. Of course, “fully autonomous AI” sounds cool and can make us look cool (or so we think) as we become visionaries over coffee or dinner – so there is a lot of incentive to add to the hype.
That said, AI agents will become part of everyday life for many. My AI agent talking to the airline’s agent for ticket booking or the bank’s bot to get a good deal on a loan – these scenarios will become real sooner than we may realise. It will save us time and headaches — but Alexa is not family, and the AI agent is not your super trustworthy best friend.
At the end of the day, real autonomy is ours. AI agents are helpers — faster, tireless helpers. If a “smart” AI agent makes a mess, I have to tell myself not to panic. Mistakes are a part of life. Or maybe the mess was an efficient execution of an incorrect instruction, providing the comfort that my stupidity still rules over agentic autonomy. The day that changes, we are in trouble!