Express Computer

AI maturity begins with governance, not code: Rahul Jain, Fidelity International


When OpenAI’s ChatGPT burst into the public consciousness in late 2022, it felt like a sudden revolution, a moment when artificial intelligence leapt from the hands of engineers to the fingertips of millions. But for Rahul Jain, Director of AI Platform at Fidelity International, it was anything but sudden. “It’s a classic technology play,” he says. “You’re standing on the shoulders of giants and extending their work further.”

Jain traces today’s AI breakthroughs back to 2017, when Google researchers introduced the transformer architecture that now underpins large language models, later embodied in models such as BERT. “For us, that was huge,” he recalls. “We were working with smaller, embedding-driven models then. BERT laid the foundation for everything that followed.” Even earlier, the deep learning renaissance of 2010–2012 had set the stage for today’s neural revolution.

By the time the generative AI wave hit, Jain shares, Fidelity International had already spent years building the scaffolding needed to absorb it. “We set up our AI Centre of Excellence in 2019,” says Jain. “Back then, there was no ChatGPT and not much industry excitement around AI. That gave us breathing room to focus on the fundamentals, not just engineering, but also governance.”

Laying the groundwork for responsible AI

That early focus culminated in Fidelity International’s Responsible AI Framework, authored in 2022, before the EU AI Act, before generative AI dominated headlines. The framework outlined how a global financial enterprise should approach AI adoption responsibly, anticipating the regulatory and ethical questions that would soon define the field.

“So when ChatGPT arrived,” Jain reflects, “we were ready. We already had the technology stack, the framework, and the governance in place. That allowed us to move quickly, and safely.”

The results speak for themselves. In just two years, Fidelity has rolled out a dozen production-grade AI applications and maintains over 100 ongoing generative AI experiments across its global operations. From legal document compliance to client interaction analysis and conversational agents like Freya, a new AI interface for UK personal investors, the company’s AI ecosystem is both expansive and evolving.

Scaling responsibly: Crawl, walk, run

But scaling AI in a regulated enterprise isn’t about moving fast and breaking things. Jain emphasises an incremental approach: “We didn’t attempt to build earth-shattering applications on day one. We started with small, simple, low-risk use cases to build confidence and understand the productionisation journey.”

This “crawl-walk-run” philosophy paid off. The team learned not only to deploy AI efficiently but also to instil trust among business users. “When we built early applications for our client services and operations teams, it gave them confidence that AI delivers on its promise; it’s not just hype,” he points out.

Among Fidelity’s internal tools, Jain mentions, are AI-driven email triage systems that cut response times by a third, and productivity co-pilots that support employees in daily communication and software development. “The beauty of it,” Jain adds, “is that not everyone at Fidelity is a native English speaker. AI helped level that field, making communication more inclusive and confident.”

Governance, platform, and people: The three pillars

Jain notes that Fidelity’s AI maturity rests on three interlocking pillars: platform, governance, and people.

On the engineering front, Jain’s team built an enterprise-grade AI platform with strong guardrails. “It’s easy to build generative AI apps,” he says. “You get an API key and start coding. But that’s not scalable.” Fidelity’s platform ensures compliance with GDPR, data localisation, and security norms, even anticipating the EU AI Act before its passage.

On the governance side, Jain emphasises collaboration across functions rarely linked before AI: risk, legal, compliance, and technology. “AI is not a pure technology play,” he insists. “You need to think about it holistically. That’s why we worked early to define how control functions engage with AI.”

And finally, on the people side, Fidelity invests heavily in reskilling and AI literacy. Through structured learning modules, workshops, and internal communities, employees learn how to use AI effectively, even something as simple as writing better prompts. “If you have basic AI awareness,” Jain says, “your ability to harness AI multiplies exponentially.”

The hiring challenge, however, remains. “It’s unrealistic to find one person who knows everything: ML, GenAI, AI Ops, and safety,” he admits. Fidelity therefore focuses on nurturing internal talent with domain expertise, supplementing them with specialised hires in key roles.

India at the core of a global AI network

Though Jain’s team operates globally, India plays a pivotal role. “It’s hard to call it just an India centre,” he says. “We’re a global team, but India is a strategic hub.”

The AI Platform and COE in India now drive much of Fidelity’s innovation and leadership. “The centre has moved from being an execution base to one that leads with thought leadership and innovation,” Jain notes. The India team collaborates closely with counterparts in Dalian and the UK, ensuring 24×7 development cycles and a truly global outlook.

AI agents and small models

As the industry buzzes with its next frontier, agentic AI, Jain’s perspective is grounded and pragmatic. “We don’t see ML, GenAI, and agents as separate categories,” he explains. “A good, complex application blends all three.”

Fidelity already uses agent-based systems for content generation and marketing. But the real frontier, Jain suggests, lies in combining agents with small, fine-tuned foundation models rather than training massive models from scratch. “We’re not in the business of building LLMs,” he clarifies. “We’re in the business of using them to deliver better outcomes.”

Instead, Fidelity plans to fine-tune smaller, domain-specific models, a strategy aligned with efficiency, privacy, and business focus. “We see real potential in coupling these with agentic systems and the Model Context Protocol,” says Jain.

Conversational AI and the next phase of investment

Looking ahead, Jain sees two clear investment priorities. “First,” he says, “is redefining how customers communicate with us, moving from websites and apps to truly conversational, natural interfaces.” Fidelity’s Freya is an early glimpse of that future.

“Second,” he adds, “is generating alpha: embedding AI across our research and asset management processes. That’s where AI can truly transform the business.”

Meanwhile, the company continues to explore how AI can reshape software development itself. “AI-assisted engineering is already changing how we build,” says Jain. “The next disruption will be even bigger. Faster development means faster innovation.”

For all the futuristic talk about AI agents and autonomous reasoning, Jain’s message is refreshingly grounded: responsible AI isn’t about chasing hype; it’s about building foundations early.

“Many people discovered AI through ChatGPT,” he reflects, “but the real story started years before that, with the quiet groundwork in governance, platforms, and people. That’s what lets us scale confidently today.”
