By Rajat Deshpande, Co-Founder & CEO of FinBox
What if a single technology could inject billions of dollars into the global banking industry annually? Well, that’s not hypothetical. The McKinsey Global Institute predicts generative AI will add between $200 billion and $340 billion to total industry revenue each year. This just reaffirms that AI is bound to fundamentally reshape how we lend, save, and manage money.
Despite all its brilliance, this technology isn’t a magic wand free from limitations or concerns. As with any transformative innovation, AI comes with its fair share of scepticism and misconceptions.
Trust is the ultimate currency in finance, so it’s crucial to separate fact from fiction. Before we progress to the next phase of AI-led lending, let’s pull back the curtain on some common myths surrounding AI’s role in the industry.
Myth #1: AI is only for the big guns
It’s easy to think that AI is exclusively feasible for financial giants with deep pockets and endless resources. After all, a complex, revolutionary technology will likely demand a multi-million-dollar investment, placing it far out of reach for smaller banks, credit unions, or regional lenders. However, this couldn’t be further from the truth.
While some cutting-edge tools may come with a hefty price tag, the playing field for AI adoption is rapidly leveling. Smaller financial institutions are finding ways to weave AI into their operations without breaking the bank.
According to McKinsey & Company, AI is steadily becoming an invaluable ally for small businesses, acting as an early warning system for loans heading south. This allows lenders to proactively step in, offer support, and prevent potential defaults, fostering stronger customer relationships.
Government bodies are also providing support. The Reserve Bank of India (RBI), India’s top banking regulator, launched MuleHunter.AI, an AI-powered solution specifically designed to empower smaller banks, including cooperative banks and rural banks, in the fight against mule accounts. While major financial institutions often possess sophisticated in-house systems, MuleHunter.AI is equipping smaller players with advanced AI and machine learning capabilities to efficiently detect fraudulent accounts. It’s a clear signal that AI isn’t just for the big guys; it’s a tool that can democratise security and efficiency across the financial landscape.
Myth #2: AI is God
With predictions of generative AI improving banking efficiency by 46% and the Indian AI-in-finance market set to cross a staggering ₹1.02 lakh crore by 2033, it’s easy to fall into the trap of believing AI is an infallible entity. The truth is, while AI holds immense potential, it still has a long way to go in becoming explainable and unbiased, developing deeper contextual understanding, and operating within a clearer regulatory framework.
It’s a double-edged sword. On one hand, it offers incredible opportunities; on the other, it opens the door to sophisticated threats. According to the IBM Institute for Business Value, 47% of executives are genuinely concerned that adopting generative AI could lead to entirely new forms of attacks on their AI models, data, or services. There’s more: almost every executive anticipates a security breach within the next three years as a direct result of adopting AI.
Deepfake fraud is another major concern. These fake but plausible AI-generated video and audio clips are undermining trust at lightning speed. And for financial institutions, the damage extends far beyond monetary losses.
The resulting erosion of trust and reputational damage may be difficult to quantify but can have devastating long-term consequences. A KPMG survey revealed that 72% of organisations consider reputational damage to be the severest consequence of fraud.
The risk of misinformation leading to fraudulent transactions is a growing concern across the industry, fuelled by the increasing accessibility and sophistication of AI. The first half of 2024 alone saw a 223% spike in deepfake-related tool trading on dark web forums, compared to the same period the previous year.
This isn’t just theoretical. A 71-year-old retired doctor in Hyderabad recently lost over ₹20 lakh in an online investment scam. The bait? An AI-generated video of the country’s Finance Minister endorsing a fraudulent trading platform.
Myth #3: AI automatically spells compliance disasters
Given AI’s complexity and the new risks it introduces, it’s easy to assume that deploying AI automatically creates shaky ground for compliance. Concerns about black-box algorithms, biased decision-making, and breaches of data privacy naturally arise, leading to the misleading perception that AI is a rogue agent that can’t be effectively regulated.
However, the RBI is taking a proactive stance with regulations designed for fairness and transparency. Its Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) provides a set of guiding principles that compel entities to balance innovation with ethical responsibility and consumer protection.
Building trust in an AI-powered future
So, if AI isn’t an unstoppable, faultless force, what happens when it errs? When an algorithm makes a mistake or a system falters, who ultimately holds the accountability?
These are not easy questions, but they will define the next frontier in an AI-driven lending landscape. It is the collective responsibility of policymakers, regulators, innovators, and financial institutions to establish clear governance standards and accountability for the ethical use of AI in lending.