Express Computer

DPDP rules vs. employee AI usage: Are Indian companies prepared?


By Sudhir Kothari, CEO & MD, Embee Software

India’s Digital Personal Data Protection (DPDP) Act marks a significant step in formalising how organisations collect, process, and safeguard personal data. While the legislation is still evolving through rules and clarifications, its intent is clear: accountability, consent, and control must sit at the centre of enterprise data practices.

At the same time, another shift is unfolding inside organisations, often far more quietly. Employees across functions are adopting generative AI tools to write emails, analyse data, draft presentations, summarise meetings, and automate everyday tasks. This adoption is rarely driven by formal policy. It is organic, fast, and largely invisible to IT and compliance teams.

The result is a growing tension between regulatory expectations under DPDP and the reality of how AI is being used inside Indian organisations today.

The invisible expansion of workplace AI

Most enterprises did not formally “roll out” AI to their workforce. Instead, AI entered through individual productivity tools, browser-based assistants, freemium applications, and embedded features inside collaboration platforms. Employees use them to save time, meet deadlines, and reduce manual effort.

From an IT leadership perspective, this creates a visibility problem. Data may be copied into AI prompts, uploaded into third-party tools, or processed outside approved environments, often without malicious intent. What was once an internal document is now potentially leaving the organisational boundary.

Under DPDP, this matters. The Act places responsibility on the organisation, not the individual employee, for how personal data is processed, stored, and protected.

DPDP and the question of accountability

One of the most important shifts introduced by DPDP is accountability. Organisations are expected to demonstrate the following:

  • Clear purpose limitation for data use
  • Consent management and lawful processing
  • Reasonable security safeguards
  • Traceability of data access and handling

In a traditional IT environment, these controls are enforced through systems, policies, and audits. AI disrupts this model because decision-making and content generation increasingly happen outside structured workflows.

When an employee pastes customer information into an external AI tool to summarise feedback, who is accountable for that data transfer? When HR teams use AI to shortlist candidates, where do consent, explainability, and fairness sit? These questions are no longer theoretical; they are operational risks.

The compliance gaps leaders must acknowledge

Most organisations are further along in DPDP readiness on paper than in practice. The gaps become clear in four areas.

  1. Shadow AI usage

IT teams often underestimate how widely AI tools are already in use. Without telemetry, access controls, or clear guidelines, AI adoption remains largely unmonitored. This mirrors the early days of shadow IT, except the data exposure risk is significantly higher.
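The telemetry gap described above can be narrowed with even basic log analysis. A minimal sketch, assuming a simple space-delimited web proxy log format and an illustrative (not exhaustive) list of generative-AI domains:

```python
# Hypothetical sketch: flag requests to known generative-AI domains in
# web proxy logs to estimate unsanctioned ("shadow") AI usage.
# The domain list and log format are assumptions for illustration.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests hitting AI domains."""
    hits = []
    for line in log_lines:
        # assumed log format: "<timestamp> <user> <domain> <path>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

logs = [
    "2025-01-10T09:14 asha chat.openai.com /c/abc",
    "2025-01-10T09:15 ravi intranet.corp.local /home",
]
print(flag_shadow_ai(logs))  # → [('asha', 'chat.openai.com')]
```

Even a rough count like this gives IT teams a baseline for how widespread adoption already is, before any enforcement decisions are made.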

  2. Data governance blind spots

DPDP places emphasis on purpose limitation and data minimisation. Yet employees routinely input more data than necessary into AI tools, often unaware of how that data may be stored, reused, or used for model training by external providers.
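Data minimisation can be partly operationalised by stripping obvious personal identifiers before text leaves the organisation. A minimal sketch, with illustrative patterns only, not a complete data loss prevention solution:

```python
import re

# Hypothetical sketch: replace obvious personal identifiers with
# placeholder tags before text is sent to an external AI tool.
# Patterns below are illustrative and deliberately simple.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def minimise(text: str) -> str:
    """Replace detected identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise feedback from ravi@example.com, phone +91 98765 43210."
print(minimise(prompt))
# → Summarise feedback from [EMAIL], phone [PHONE].
```

Redaction of this kind supports purpose limitation without blocking the underlying task: the AI tool still sees the feedback, just not the identity behind it.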

  3. Vendor and tool risk

Not all AI tools offer enterprise-grade data protection, residency controls, or contractual safeguards. Without a structured vendor evaluation framework, organisations may unknowingly expose regulated data to platforms that fall outside DPDP-aligned governance.

  4. Lack of AI-specific policies

Many organisations rely on general IT usage policies that do not address AI explicitly. As a result, employees are left to interpret what is “acceptable use” on their own, creating inconsistency and risk.

CFOs and CIOs: A shared risk surface

While DPDP is often discussed as a legal or compliance issue, its implications extend directly into financial and operational risk. CFOs increasingly need to account for the following:

  • Compliance budgets for data protection and AI governance
  • Potential penalties and remediation costs
  • Reputational risk linked to data misuse or AI-related incidents

For CIOs and IT leaders, the challenge is execution. Controls must be practical enough not to stifle productivity while being robust enough to satisfy regulatory scrutiny. This requires closer alignment between finance, IT, legal, and risk teams than many organisations currently have.

Cloud, hybrid architectures, and AI control

Cloud-native and hybrid architectures play a critical role in resolving this tension. Centralised identity, access control, data classification, and monitoring provide the foundation for governing AI usage at scale.

When AI capabilities are embedded within trusted enterprise platforms, organisations gain greater control over where data flows, how it is processed, and how outputs are logged. This is significantly harder to achieve when employees rely on fragmented, consumer-grade tools.
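One common pattern for gaining that control is routing all AI requests through a single internal choke point that records an audit trail. A minimal sketch, assuming a hypothetical gateway in front of a sanctioned model endpoint; the forwarding function is a stand-in, not a real API:

```python
import hashlib
import time

# Hypothetical sketch of an internal "AI gateway": every request to an
# approved model passes through one choke point that records who asked,
# when, and a fingerprint of the prompt, without storing raw content.
AUDIT_LOG = []

def forward_to_model(prompt: str) -> str:
    # Stand-in for a call to a sanctioned enterprise AI endpoint.
    return "stubbed model response"

def gateway(user: str, prompt: str) -> str:
    AUDIT_LOG.append({
        "user": user,
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    })
    return forward_to_model(prompt)

gateway("asha", "Summarise Q3 customer feedback")
print(AUDIT_LOG[0]["user"], AUDIT_LOG[0]["prompt_chars"])
```

Logging a hash and length rather than the prompt itself keeps the audit trail useful for traceability under DPDP while avoiding a second copy of potentially personal data.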

The architectural choice, therefore, is not just about performance or cost; it directly affects compliance posture.

From policy to behaviour change

One of the most underestimated aspects of AI governance is behaviour change. Employees are not intentionally bypassing controls; they are optimising for speed and convenience. Addressing this requires more than restrictive policies.

Organisations that are making progress focus on:

  • Clear guidance on acceptable AI use cases
  • Training that explains why controls exist, not just what is prohibited
  • Providing sanctioned tools that meet productivity needs safely

This approach reduces resistance and improves adoption of compliant alternatives.

Preparing before regulation catches up

Regulation tends to follow technology adoption, not lead it. DPDP enforcement will mature over time, but organisations cannot afford to wait for clarity on every edge case.

The organisations best prepared for 2026 are those treating AI governance as part of core digital hygiene, similar to identity security or data classification, rather than a future compliance exercise.

The real question for Indian enterprises is not whether employees will use AI. They already are. The question is whether organisations can create guardrails that allow innovation to continue without exposing them to regulatory, financial, and reputational risk.

In the AI-first workplace, preparedness is not defined by having the right tools but by having the right foundations: the governance, visibility, and accountability to use them responsibly.
