In an exclusive interaction with Express Computer, Kishan Sundar, Chief Technology Officer, Maveric Systems, offers a comprehensive perspective on the rapidly evolving role of AI in the banking sector. He delves into the key structural and architectural challenges that banks encounter when attempting to scale AI initiatives beyond initial pilot stages, emphasising that success lies not just in experimentation but in building robust, enterprise-grade foundations.
Sundar underscores the critical importance of strong data ecosystems, scalable architecture, and well-defined governance frameworks as essential enablers for sustainable AI adoption. He further discusses how financial institutions can enhance their cybersecurity posture in an increasingly complex threat landscape, while carefully balancing the need for innovation with stringent regulatory requirements.
What are some of the key challenges banks face in scaling AI from pilot initiatives to enterprise-wide implementation?
Banks do not struggle with AI pilots; they struggle with scale, because the underlying architecture was never designed for it. Most institutions still maintain multiple versions of the same data, spread across product lines and channels. Any AI model built on this foundation will behave inconsistently, which is why pilots show promise but enterprise deployment stalls.
A second challenge is the gap between business expectations and technical readiness. Business teams want contextual AI outputs, but the systems feeding the models are not standardised or observable enough to support high-risk decisions. Without lineage, monitoring, and clear ownership, AI cannot be deployed at scale. The issue here is not model quality but architectural maturity. Banks can scale AI meaningfully only when they resolve data interoperability issues, establish a unified AI governance process, and embed observability into every layer.
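The lineage and monitoring requirement described here can be sketched in code. The following is a minimal, illustrative Python example, not a description of any specific bank's implementation: every model output is wrapped with metadata (model version, source system, a hash of the inputs, a timestamp) so that decisions remain traceable and auditable downstream. The function and field names are assumptions made for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def predict_with_lineage(model, features: dict, model_version: str,
                         source_system: str) -> dict:
    """Attach lineage metadata to a model output so it can be audited
    and monitored downstream. All names here are illustrative."""
    payload = json.dumps(features, sort_keys=True).encode()
    return {
        "prediction": model(features),
        "model_version": model_version,   # which model produced this
        "source_system": source_system,   # where the inputs came from
        "input_hash": hashlib.sha256(payload).hexdigest(),  # reproducibility
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Usage with a stand-in scoring function (purely for illustration)
score = lambda f: 1 if f["income"] > 50_000 else 0
out = predict_with_lineage(score, {"income": 62_000},
                           model_version="credit-v1.2",
                           source_system="loan-origination")
```

Because the input hash and model version travel with every prediction, an auditor can later reproduce exactly which model saw which data, which is the observability property the answer above argues for.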
What are the most practical cybersecurity measures banks are deploying today, given increasing threats and regulatory demands?
Effective security controls in banking focus on reducing basic hygiene failures, because most breaches do not come from zero-day exploits. They come from credential theft, unpatched systems, overly permissive access, and gaps in monitoring. Banks are fixing this by enforcing zero-trust, strengthening identity management, and instrumenting systems for continuous visibility.
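The zero-trust principle mentioned above can be illustrated with a minimal sketch: no implicit trust, credentials re-verified on every request, and access denied unless an explicit policy entry allows it. The policy structure, roles, and resource names below are illustrative assumptions, not a real bank's access model.

```python
# Explicit allow-list: (role, resource) pairs mapped to permitted
# actions; anything not listed is denied by default.
POLICY = {
    ("teller", "customer_profile"): {"read"},
    ("ops_admin", "payment_switch"): {"read", "restart"},
}

def authorize(role: str, resource: str, action: str,
              token_valid: bool) -> bool:
    """Deny by default: a request passes only with a valid, freshly
    verified credential AND an explicit policy entry for the action."""
    if not token_valid:  # identity is re-checked on every call
        return False
    allowed = POLICY.get((role, resource), set())
    return action in allowed

# Usage: a teller may read a profile, but nothing grants write access
ok = authorize("teller", "customer_profile", "read", token_valid=True)
```

The design choice worth noting is the default-deny posture: overly permissive access, one of the hygiene failures named above, is prevented structurally because permissions must be granted explicitly.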
There is also a renewed focus on the human layer, since structured cyber awareness programmes remain a critical defence against phishing. To meet regulatory requirements, banks are investing in incident response playbooks, automated compliance checks, and regular breach simulation exercises.
Good security is a combination of engineering discipline and consistent employee behaviour, supported by clear controls and real-time observability.
What are the risks of fragmented technology modernisation in banking, and how can CTOs create a cohesive architecture roadmap?
Fragmented modernisation creates long-term technical debt. When banks upgrade systems in isolation, they introduce new silos, inconsistent data models, and complex integration patterns. This increases the total cost of ownership, slows change, and weakens customer experience because the bank behaves like several disconnected systems rather than one institution.
CTOs need a domain-aligned roadmap, not a tool-aligned one. Architecture should evolve around business capabilities, common data standards, and cloud-ready, API-first principles. A data fabric can help unify access, and enterprise observability provides the visibility needed to manage a distributed estate.
Modernisation should be measured not by the number of upgraded systems, but by how much friction is removed from data flow, change delivery, and customer journeys.
How is generative AI influencing customer experience and product personalisation in banking, and what guardrails should financial institutions put in place?
GenAI adds value when it reduces effort in high-friction workflows. Banks are using GenAI to simplify onboarding, accelerate loan evaluations, and make servicing more accessible through context-aware assistants. GenAI’s strength lies in speed and consistency.
Where personalisation is concerned, GenAI enables faster analysis of behaviour and intent, which helps banks offer more precise credit, risk, and investment recommendations. Relationship managers benefit from a complete view of the customer, generated in real time.
Guardrails matter because GenAI systems can hallucinate, drift, or produce biased outputs. Banks need strict controls around data protection, model explainability, PII masking, jailbreak prevention, and human review for high-risk decisions. If a model’s behaviour cannot be traced, it should not be used in regulated workflows.
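One of the guardrails named above, PII masking, can be sketched in a few lines: scrub obvious identifiers from text before it reaches a GenAI model. Real deployments use dedicated PII-detection services with far more robust detection; the regular expressions below are deliberately simplified illustrations.

```python
import re

# Simplified, illustrative PII patterns; production systems use
# dedicated detection services, not hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d{10,12}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    is passed to a model or written to a log."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_pii(
    "Customer jane.doe@example.com asked about card 4111 1111 1111 1111"
)
```

Typed placeholders such as `[EMAIL]` keep the prompt intelligible to the model while ensuring the raw identifier never leaves the bank's boundary, which also supports the traceability requirement for regulated workflows.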
With regulatory frameworks evolving rapidly, how can banks balance compliance with innovation while adopting AI-driven decision systems?
Banks can innovate only if their AI systems are controllable. That requires governance embedded into the model lifecycle, not added after deployment. High-risk systems need clear lineage, audit trails, fairness testing, and explainability from day one.
Compliance is not a blocker. It becomes an accelerator when banks automate monitoring, bias evaluation, and policy checks as part of the pipeline. Engaging regulators early reduces uncertainty and makes it easier to validate new AI-led decision models.
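The idea of compliance checks running "as part of the pipeline" can be sketched as an automated release gate. The thresholds and metric names below are illustrative assumptions, not regulatory values: the point is that fairness, explainability, and lineage checks run automatically before deployment, and every failure is recorded for the audit trail.

```python
def release_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Run policy checks before a model is deployed; collect every
    failure so the audit trail shows exactly why a release was blocked.
    Thresholds here are illustrative, not regulatory requirements."""
    failures = []
    if metrics.get("demographic_parity_gap", 1.0) > 0.05:
        failures.append("fairness: demographic parity gap above 5%")
    if metrics.get("explainability_coverage", 0.0) < 0.95:
        failures.append("explainability: under 95% of decisions attributable")
    if not metrics.get("lineage_documented", False):
        failures.append("governance: data lineage not documented")
    return (len(failures) == 0, failures)

# Usage: a model that passes every automated policy check
ok, reasons = release_gate({
    "demographic_parity_gap": 0.03,
    "explainability_coverage": 0.98,
    "lineage_documented": True,
})
```

Because missing metrics default to failing values, a model cannot slip through the gate simply by not reporting a check, which mirrors the "governance embedded into the lifecycle, not added after deployment" point above.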
The real balance comes from disciplined architecture: transparent models, observable data flows, and accountable ownership. When these elements are in place, innovation becomes safer and faster.
What future skill sets and organisational structures are becoming critical for banks to build and sustain an AI-first technology ecosystem?
AI-first banks need teams that combine domain depth with strong engineering fundamentals. Skills in applied AI, data governance, cloud-native architecture, and model risk management are becoming essential. Domain specialists from credit, risk, payments, and compliance will remain central because AI without context does not scale.
Banks are moving towards federated AI models. A central excellence team defines standards, reusable assets, and governance, while domain teams build use-case-specific implementations. This avoids duplication while preserving contextual relevance.
New roles in AI observability, model security, and AI operations are emerging as banks start treating models like critical production systems. The institutions that invest early in these capabilities will see faster and more reliable AI adoption.