Express Computer

Responsible AI moves from principle to practice, but data and regulatory gaps persist: Nasscom


Responsible AI (RAI) is fast becoming a business imperative rather than a purely ethical discussion for Indian enterprises. A new report by Nasscom shows that companies confident about scaling AI responsibly are also the ones that have invested in mature governance frameworks, though access to quality data and regulatory clarity remain unresolved challenges.

Released at Nasscom’s Responsible Intelligence Confluence in New Delhi, the State of Responsible AI in India 2025 report is based on a survey of 574 senior executives across large enterprises, SMEs, and startups involved in AI development or adoption. Conducted between October and November 2025, the study provides a year-on-year comparison with 2023 and points to a clear shift from awareness to action.

Maturity is rising, led by large enterprises

According to the report, around 30% of Indian businesses have already established mature Responsible AI practices, while another 45% are actively implementing formal frameworks and policies. This represents a significant improvement over 2023, when RAI was still largely conceptual for many organisations.

The data shows a strong correlation between AI maturity and responsible practices. Nearly 60% of companies that say they are confident about scaling AI responsibly already have mature RAI frameworks in place. Large enterprises are leading this transition, with 46% reporting mature practices. Startups and SMEs trail at 16% and 20% respectively, but Nasscom reads this as ecosystem-wide momentum rather than a gap, citing the growing willingness among smaller firms to learn, comply, and invest.

Sector-wise, BFSI emerges as the most mature at 35%, followed by technology, media and telecom (31%) and healthcare (18%). Across these industries, nearly half of organisations are actively advancing their Responsible AI frameworks.

From compliance to trust and accountability

Speaking at the launch, Nasscom leaders underlined that Responsible AI is now foundational to trust and brand credibility, especially as AI systems influence decisions in sensitive domains such as finance, healthcare, and public services.

Workforce enablement has become a central pillar of this transition. Nearly nine out of ten organisations surveyed are investing in sensitisation and training around Responsible AI. Companies report the highest confidence in meeting data protection obligations—reflecting relatively mature privacy frameworks—but monitoring-related compliance continues to be a concern.

Accountability for AI governance still sits largely at the top. About 48% of organisations place primary responsibility with the C-suite or board, though 26% are beginning to shift it to departmental heads. AI ethics boards and committees are also gaining traction, particularly among mature organisations, nearly two-thirds of which have constituted such bodies. However, some companies remain cautious about how effective these forums are in practice.

Hallucinations, data quality and regulation remain pain points

Despite visible progress, the report highlights persistent challenges. On the risk front, hallucinations are the most commonly reported issue (56%), followed by privacy violations (36%), lack of explainability (35%), and unintended bias or discrimination (29%).

Implementation barriers vary by company size. Lack of high-quality data (43%) is the most frequently cited constraint overall, followed by regulatory uncertainty (20%) and shortage of skilled personnel (15%). Regulatory ambiguity is a particular concern for large enterprises and startups, while SMEs point to high implementation costs as a major hurdle.

Agentic AI raises the bar further

As AI systems become more autonomous, Responsible AI is increasingly seen as the deciding factor for whether organisations can scale with confidence. Nearly half of mature organisations believe their current frameworks are prepared to handle emerging technologies such as agentic AI. At the same time, industry experts caution that most existing frameworks will need substantial updates to address new categories of risk introduced by more autonomous systems.

The report concludes that sustained investment in skills, governance mechanisms, high-quality data, and continuous monitoring will be essential. Enterprises that embed responsibility across the entire AI lifecycle—rather than treating it as a compliance checkbox—will be better positioned to scale safely and build long-term trust.

For India, Nasscom argues, the opportunity goes beyond adoption at scale. The real test of leadership will be whether the country can set global benchmarks for trustworthy, human-centric AI that delivers broad societal value.
