Express Computer

Trapped in the bot loop


By Sumeysh Srivastava

A few months ago, a relative was admitted to a hospital in Delhi. I needed to contact their health insurance provider urgently. Instead, I found myself navigating a voice-based automated system that offered menus, asked me to describe my issue, and read out pre-set options. None addressed my situation. There was no pathway to a human agent. After fifteen minutes, I gave up.

This experience is not unusual. Anyone who has tried to dispute a bank charge, get a refund from an airline, or cancel a streaming subscription will recognise it. And it is not accidental. Companies are systematically replacing human support with AI. India’s chatbot market is projected to reach $1.26 billion by 2030, growing at 26% annually. A Reuters investigation found Indian AI startups helping companies cut support staff by up to 80%. An AI interaction costs around $0.50 compared to $6 or more for a human ticket. The business logic is clear. But what is efficient for the company is not always adequate for the consumer.

For routine interactions, chatbots work well enough. Checking a balance, tracking a delivery, resetting a password. But the calculus shifts when the service is essential and the consumer’s need is urgent. Being unable to reach your insurer during a hospitalisation, or finding your bank account frozen with no human recourse, is not a service inconvenience. It is a failure of access to a service the consumer has contracted and paid for.

In 2024, a Canadian tribunal ruled against Air Canada after its chatbot gave a passenger wrong information about bereavement fares. The airline argued the chatbot was a separate legal entity. The tribunal dismissed this, holding that a company is responsible for all information on its website, whether from a static page or a bot. Had the passenger been able to reach a human agent, the error would likely never have escalated to a tribunal.

India’s consumer protection architecture is extensive, but it was not designed with this problem in mind. The IRDAI requires insurers to appoint Grievance Redressal Officers and routes complaints through the Bima Bharosa portal, but nothing here mandates a human escalation when a chatbot fails. The RBI’s FREE-AI Framework encourages human-in-the-loop oversight for high-risk decisions like credit scoring, but does not extend this to customer service chatbots. TRAI regulates quality of service for telecom providers but has no framework addressing AI-mediated customer interactions.

The Consumer Protection Act, 2019 requires a grievance mechanism but never defines what it must look like, and has no provision addressing an AI-only system with no exit. MeitY’s AI governance guidelines focus on frontier model risks and deepfakes. The gap is not that India lacks consumer protection. It is that none of these frameworks address the scenario where the consumer’s only point of contact is an AI system that cannot help them.

Other countries have started to act, in different ways. Brazil’s Federal Decree No. 11,034 of 2022 is the most direct model. It requires service providers in federally regulated sectors to provide at least eight hours of daily human telephone support. Bots are permitted, but consumers must always be able to reach a person. In the UK, the Competition and Markets Authority published guidance in March 2026 confirming that consumer protection law applies fully to AI interacting with consumers, and that it will act against AI that steers consumers toward company-beneficial choices or withholds material information.

In the Netherlands, the consumer and data protection authorities issued a joint statement in October 2025 warning that consumers are increasingly getting trapped in chatbot loops with no path to a human. The regulators stated that existing consumer law already requires human contact to be available, and called on the European legislature to codify this in the forthcoming Digital Fairness Act. None of these approaches ban chatbots. But each recognises that AI in essential services cannot operate without accountability to the consumer.

India does not need to replicate any single model. But three measures would address the gap. Sector regulators such as the IRDAI, RBI and TRAI should mandate a clear human escalation pathway in AI-driven customer service. The Consumer Protection Act or E-Commerce Rules should recognise the absence of meaningful human support as a potential deficiency in service. And India’s AI governance framework should adopt a risk-based classification for chatbot deployment, distinguishing between essential services and low-stakes interactions.

This is not about resisting technology. For routine interactions, chatbots add value. But when a consumer in distress cannot get past an automated system to access a service they have paid for, that is a failure the regulatory framework does not yet address. India is actively shaping its approach to AI governance. Ensuring that the AI most Indians encounter in their daily lives is subject to meaningful consumer safeguards would be a practical and timely place to start.

– Sumeysh is Partner at the public policy firm The Quantum Hub (TQH)
