Artificial Intelligence is no longer confined to proofs-of-concept — it is shaping the very core of enterprise operations. According to McKinsey, 65% of organizations globally have adopted at least one AI capability, and more than half of early adopters report measurable value creation. Generative AI, in particular, is fueling this momentum: Goldman Sachs estimates it could add up to $4.4 trillion annually to the global economy, with sectors like banking, healthcare, and retail standing to gain the most.
Yet, the rush to adopt AI comes with its own set of challenges — from model accuracy, compliance, and security to the pressure of showing ROI in just 12–18 months. Enterprises are moving beyond experimentation, allocating dedicated budgets, restructuring teams, and looking to trusted partners for frameworks that balance innovation with responsibility.
Express Computer spoke with Singaravelu Ekambaram, Senior Vice President and Global Head of Delivery, Americas at Cognizant, to understand how client expectations are shifting, why Small Language Models (SLMs) are emerging as a compelling alternative to generic LLMs, and what it takes to operationalize AI responsibly at scale. He also shares how Cognizant is preparing clients for high-pressure, real-time demands — and where the next wave of AI-led transformation is likely to unfold.
Some edited excerpts:
From your vantage point, what are the biggest shifts you’re seeing in client expectations as GenAI becomes mainstream?
The evolution of AI has been extraordinary, with Generative AI (GenAI) marking a pivotal milestone in technological advancement. Our clients are increasingly eager to harness its transformative potential to reshape their businesses. However, this innovation brings unique challenges, often requiring strategic realignment and operational overhaul. To ensure long-term success, clients are adopting a holistic approach to AI, focusing on contextual performance and continuous improvement across their organizations.
Enterprise-wide adoption is gaining momentum, particularly in areas such as content summarization and generation for Sales & Marketing, as well as productivity enhancement in Software Engineering. As clients transition from proof-of-concept pilots to full-scale implementations, they are initiating organizational changes and allocating dedicated budgets to support AI programs. The overarching goal is to drive improved business outcomes and elevate customer experiences.
This shift is also fostering new collaborations between technology providers and service organizations, aimed at unlocking the full value of GenAI. By applying proven implementation frameworks, we’ve helped clients move from PoC to production across diverse scenarios — for instance, delivering hyper-personalized marketing content for a retailer, modernizing contact centre operations for a bank, and automating appeals and grievance management for a healthcare enterprise.
Cognizant is pioneering Small Language Models (SLMs) purpose-built for industries. How do these compare with large, generic LLMs in delivering measurable client outcomes, and what early adoption stories stand out?
Large Language Models (LLMs), trained on vast internet-scale datasets, are designed to perform a wide range of tasks with human-like creativity and reasoning. While their capabilities are impressive, enterprise clients face several challenges when adopting them — such as high operational costs, slower response times, lack of domain specialization, increased hallucinations, and the significant effort required to train them with enterprise-specific context and data. Data privacy and security concerns further complicate their deployment.
As GenAI evolves into Agentic AI, new paradigms for application development are emerging — especially for repetitive tasks requiring specialized skills. In such cases, LLMs may be excessive and suboptimal. This shift has led to the rise of Small Language Models (SLMs), which are purpose-built, cost-effective, and highly contextualized to enterprise data and workflows. SLMs offer better control over customization, training, and governance, and can be deployed securely within an enterprise’s edge or cloud infrastructure. They deliver faster, more accurate responses tailored to specific functional areas.
Recent examples include:
A custom SLM for a manufacturing major to transform field service operations.
A healthcare payer SLM to analyse claims and flag compliance issues or potential errors.
A domain-specific SLM for an education and information services client to power customer-facing products.
We are actively partnering with NVIDIA to craft industry-specific niche SLMs and LLMs. Our Agentic AI solution for Payer Claims runs on an SLM, reflecting our commitment to democratizing AI. Through our investment in Agent Foundry, a platform that blends SLMs with orchestration tools, we're building adaptive, intelligent ecosystems that accelerate ROI while ensuring ethical AI governance. This positions us as a trusted partner for future-ready, responsible innovation.
In industries like banking and healthcare — where compliance, security, and accuracy are non-negotiable — what frameworks are you putting in place to operationalize AI responsibly at scale?
Responsible AI is essential across all industries and should be a foundational principle for any AI implementation. However, sectors with stringent regulations and low tolerance for errors — such as Healthcare and Banking — must exercise heightened caution. In these environments, the stakes are higher, and the consequences of AI missteps can be significant.
Cognizant applies a standards-based approach to Responsible AI. It begins by defining a clear set of foundational principles and translating them into actionable standards. These principles — documented under categories such as Transparency and Explainability, Fairness and Inclusivity, Security and Privacy, Accountability and Auditability, Human Empowerment, Safety and Reliability, and Sustainability and Scalability — are then operationalized through standards organized in three dimensions: Foundational, Operational, and Strategic. This ensures that Responsible AI is embedded across the entire lifecycle of AI systems, supported by structured governance, enabling tools, and continuous evaluation.
Our approach is further reinforced by processes and tools embedded across the AI lifecycle, alignment with global frameworks and standards such as ISO/IEC 42001, and risk-appropriate controls mapped to use cases and deployments. Complementing this is our TRUST™ platform, a modular, interoperable platform featuring core components such as risk assessment tools, fairness and privacy safeguards, testing oversight, and monitoring dashboards for scalable Responsible AI implementation.
This structured approach ensures that our principles are consistently translated into measurable, actionable standards while maintaining flexibility to adapt to evolving requirements. Cognizant also has a dedicated Responsible AI practice and an appointed Chief Responsible AI Officer to drive a standardized approach across all programs.
Your mandate includes ensuring “zero downtime” in sectors like retail during peak seasons and navigating fortnightly regulatory shifts in healthcare. Can you walk us through how your delivery playbooks adapt to these high-pressure, real-time demands?
GenAI inherently helps address these challenges. These models can simulate potential scenarios triggered by known events that typically cause usage spikes, enabling proactive mitigation before issues arise. They can also automate the tracking and interpretation of regulatory changes, delivering rapid impact assessments that support swift, informed action.
Our delivery playbooks essentially blend traditional reliability engineering and DevOps practices with GenAI controls to design self-healing applications. These applications are tightly integrated with predictive models that feed into CI/CD pipelines, enabling real-time code updates with zero downtime and rollback capabilities. The playbooks emphasize modular system design, allowing components affected by frequent regulatory changes to be updated with minimal impact on other parts of the system.
This methodology is embedded within our robust Neuro IT Operations platform, which provides clients with a unified view while transitioning them into AI-led autonomous operations. Another example is our Intellipeak framework — a peak management framework that ensures readiness and seamless execution during high-demand periods, maintaining business continuity for our clients.
Many CIOs are under pressure to show ROI from AI investments within 12–18 months. How do your teams ensure delivery translates into tangible business impact — not just pilots or proofs-of-concept?
GenAI has garnered significant interest, with enterprises eager to experiment and capitalize on its potential. While it has gained traction in software engineering and sales & marketing, it has yet to fully penetrate core functional use cases across industries. Key challenges include high costs, model accuracy, talent availability, data privacy and security, and the complexity of customizing models with enterprise-specific context and data.
Cognizant is developing an end-to-end solution to address these challenges comprehensively. We are reskilling and upskilling our associates. We recently completed the world’s largest Vibe Coding event to drive AI literacy across all levels of the organization. The event earned a GUINNESS World Records title for the most participants in an online generative AI hackathon. Our AI labs produce world-class software, from productivity tools to multi-agent orchestrators, driving AI adoption across enterprises. We also have robust autonomous governance models and a Responsible AI framework to help clients jumpstart their AI journey.
Our strategic collaborations with major players enrich our joint go-to-market strategies and solutions. The evolution of Agentic AI will further accelerate adoption and drive better results by addressing core business areas. Cognizant’s AI readiness assessment toolkit and Agentic AI use case fitment framework help clients identify foundational activities and prioritize high-value use cases.
We offer an end-to-end AI consulting and implementation package to help clients lay the foundational groundwork for secure, scalable, and well-governed AI solutions. Our three-vector approach focuses on enabling hyperproductivity, industrializing AI, and agentifying the enterprise. As part of industrializing AI, we help our clients rebuild their tech stack to meet future AI demands using our differentiated solutions tailored for heterogeneous landscapes.
Looking ahead, what excites you most about the convergence of AI, domain expertise, and delivery excellence — and where do you see Cognizant leading the charge in the next 2–3 years?
At Cognizant, we recognize the immense potential of AI solutions and believe they are going to disrupt the way businesses operate today. As early adopters, we've built an end-to-end portfolio of offerings, ranging from software engineering to robust AI governance models. These offerings empower clients to confidently identify and implement AI solutions aligned with their business objectives. Cognizant is well-positioned, with the right mix of frameworks, solution accelerators, assessment toolkits, software products, and industry collaborations, to serve as a trusted advisor and implementation partner.
Cognizant has invested in AI-infused platforms such as Flowsource, Neuro IT Ops, Neuro SAN, and Neuro Ignition to bring differentiation and accelerate GenAI adoption across engineering and operations. We are also making significant investments in reskilling and upskilling to support domain-led GenAI adoption at scale.
Our three-vector strategy drives AI-led innovation across engineering and business processes:
V1: Focusing on hyperproductivity
V2: Modernizing cloud and data platforms to industrialize AI at scale
V3: Enabling the agentification of enterprises
This strategy positions Cognizant to lead the next wave of AI-driven transformation.