The AI-regulation dilemma

By Rohit Taneja, Founder & CEO, Decentro

A general principle of economics is that regulation filters the flow of information, and with it the intellectual capacity and markets that form around an innovation. The more laissez-faire the regulatory framework, the more freely information flows, allowing more people to specialise in a subject and creating a market for their services. Time and again, markets and expertise have been created, or killed, through regulation. While a restrictive regulatory framework might be detrimental to India's AI and financial-inclusion potential, a laissez-faire approach might prove a threat to India's vast population by amplifying existing biases and creating cybersecurity risks. With India considered the world's third most robust country for AI innovation, conversations around regulating AI, especially in the BFSI sector, need to be viewed through a first-principles lens.

The AI-regulation dilemma decoded
India faces a unique challenge when it comes to regulating AI. Firstly, it does not have
workable examples that can be implemented in the Indian context. On one side, there is the
US example, which has taken a very lenient approach to regulating AI, focusing more on
innovation while stating some concerns over discrimination and privacy. On the other side,
there is the EU example, where a very restrictive policy to regulate AI has come into play,
involving companies running constant iterative evaluations of risks, imposing audit trails
and hefty fines in case of violations. In the middle, there is the UK model, which has taken a
pro-innovation approach to regulating AI, putting in place an agile model that learns from
actual experiences and adapts accordingly. Unfortunately, none of these frameworks maps
cleanly onto India, where the stakes of flawed AI systems are very high. Moreover, following
the wrong approach might hinder innovation in the services sector, the largest contributor to
our GDP.

Secondly, India has a large population, most of whom are not digitally literate. Hence, the
need for a strong regulatory framework becomes all the more important for their safety. The
potential harm from having no regulation, and the potential benefit of good regulation, are
both immense and cannot be ignored. This is the crux of India's AI-regulation dilemma, and
the reason the country went back and forth on regulating the technology throughout 2023.
However, despite the confusion, two things are clear. GenAI is a
revolutionary technology that can significantly change the way we bank in India. At the
same time, the costs of not having robust regulations are huge and cannot be ignored.

The first principle of AI regulation

AI, especially in the BFSI sector, is a fast-evolving landscape, and any regulatory framework
set today will become outdated tomorrow, creating newer risks and challenges. To avoid
playing catch-up, the regulatory framework for AI must rest on first principles: data safety
for consumers and enterprises; algorithms that correct for existing biases rather than
encoding them; and systemic security, so that failures do not create contagion across the banking system as a whole.

Keeping these first principles in mind, the AI-regulatory framework must have three pillars:
1. Strengthen existing digital public infrastructure (DPI)
AI models are only as strong as the data that is fed to them. To make AI models stronger,
multiple large banks are consolidating the information they have and building AI models on
top of it. However, this creates an obvious bias in their models. The banks, through these
models, will continue to get the same type of customers, leaving behind people who are not
in the formal banking fold. This is where a strong DPI can play a pivotal role. The coverage
of Aadhaar, Jan Dhan, and phone connectivity in India is almost universal, and these
systems need strengthening. They must not only be made more secure; the government must
also work with the industry to create secure APIs for accessing this data and including it
in AI models to promote inclusivity. Knowing that these data sets are
included in the model, the output can be estimated, and any major divergence from the
output can be flagged.
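The flagging step above can be sketched in code. The following is a minimal, hypothetical illustration, not an actual regulatory mechanism: it compares a model's observed approval rates per customer segment against an expected baseline (assumed to come from DPI-linked data) and flags segments that diverge beyond a tolerance. The segment names and numbers are invented for the example.

```python
# Hypothetical sketch: flag customer segments whose observed approval
# rate diverges sharply from a baseline estimated from DPI-linked data.
# Segment names, rates, and the tolerance are illustrative assumptions.

def flag_divergence(observed: dict, expected: dict, tolerance: float = 0.10) -> list:
    """Return (segment, observed_rate, expected_rate) tuples for segments
    whose observed rate differs from the expected rate by more than
    `tolerance` (absolute difference)."""
    flagged = []
    for segment, expected_rate in expected.items():
        observed_rate = observed.get(segment)
        if observed_rate is None:
            continue  # no data for this segment; nothing to compare
        if abs(observed_rate - expected_rate) > tolerance:
            flagged.append((segment, observed_rate, expected_rate))
    return flagged

# Example: newly banked rural customers approved far below the
# rate the DPI-linked baseline would suggest.
expected = {"urban_salaried": 0.62, "rural_new_to_credit": 0.40}
observed = {"urban_salaried": 0.60, "rural_new_to_credit": 0.18}
divergences = flag_divergence(observed, expected)
```

In this sketch the urban segment stays within tolerance while the rural, new-to-credit segment is flagged, which is exactly the kind of exclusion a DPI-anchored baseline would make visible.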

2. Create an exchange for best practices and fraud data
While integrating AI models with DPIs might solve the bias problem, AI algorithms have a
way of producing unforeseen outputs over time. This calls for an exchange where best
practices, such as the results of algorithm audits, cases of fraud, or unforeseen outputs,
can be shared. This will help other players in the industry adapt to these divergences and
fold them back into their own algorithms to make them safer.
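One way to picture such an exchange is as a shared registry that institutions publish advisories to and query by category. The sketch below is purely illustrative, assuming invented institution names and categories; a real exchange would need authentication, governance, and standardised schemas.

```python
# Hypothetical sketch: a minimal in-memory "exchange" where institutions
# publish advisories (audit findings, fraud cases, unforeseen outputs)
# and others fetch them by category. All names are illustrative.

from collections import defaultdict

class AdvisoryExchange:
    def __init__(self):
        # category -> list of advisories published under it
        self.advisories = defaultdict(list)

    def publish(self, category: str, institution: str, summary: str) -> None:
        """Record an advisory from an institution under a category."""
        self.advisories[category].append({"from": institution, "summary": summary})

    def fetch(self, category: str) -> list:
        """Return all advisories published under a category."""
        return list(self.advisories[category])

exchange = AdvisoryExchange()
exchange.publish("fraud", "Bank A", "Synthetic-ID loan applications spiking")
exchange.publish("audit", "Bank B", "Model under-approves new-to-credit segment")
```

Other participants would then poll the categories relevant to them and adjust their own models and controls accordingly.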

3. Use AI to monitor AI
Just as antivirus software cracks down on the code of viruses, AI can be used to detect divergences in the system and flag them automatically. Once a divergence is flagged, the system can suggest how to rectify it and automatically publish the information to the
exchange for other industry players to review.
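In its simplest form, such a monitor is a statistical watchdog over a model's output stream. The sketch below is an assumed, minimal version: it tracks a series of daily output statistics (for example, approval rates) and flags any value that drifts beyond a z-score threshold of the history seen so far; the threshold and data are illustrative.

```python
# Hypothetical sketch: a lightweight monitor that watches a stream of
# model output statistics and flags values drifting beyond a z-score
# threshold of the history observed so far. Threshold is illustrative.

import statistics

class OutputMonitor:
    def __init__(self, z_threshold: float = 3.0, min_history: int = 5):
        self.z_threshold = z_threshold
        self.min_history = min_history  # need enough history before judging
        self.history = []

    def observe(self, value: float) -> bool:
        """Record a value; return True if it diverges from the history."""
        flagged = False
        if len(self.history) >= self.min_history:
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.z_threshold:
                flagged = True
        self.history.append(value)
        return flagged

monitor = OutputMonitor()
daily_rates = [0.41, 0.40, 0.42, 0.39, 0.41, 0.40, 0.75]  # last value drifts
flags = [monitor.observe(rate) for rate in daily_rates]
```

A flagged value would then be pushed to the industry exchange described above, alongside any suggested remediation. Production systems would of course use richer drift detectors than a z-score, but the principle of automated, continuous oversight is the same.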

Both AI innovation and the BFSI industry are critical to India's economic growth, but if not handled well, the combination might prove disastrous. Creating a regulatory framework is important, but not at the cost of innovation. Our regulators have understood this. As a next step, thinking from these first principles can help make our digital infrastructure for banking stronger.
