By Advocate Abhijith Balakrishnan, LL.M, CIPP/E, High Court of Kerala
In data protection laws worldwide, “legitimate interest” is often recognized as one of the lawful bases allowing organizations to process personal data without consent. Under the EU’s General Data Protection Regulation (GDPR), for example, a company can process data if it is “necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (Article 6(1)(f) GDPR), so long as those interests are not overridden by the individual’s rights. However, India’s new Digital Personal Data Protection Act, 2023 (DPDP Act) pointedly does not include legitimate interest as a standalone lawful ground for processing.
Instead, the Indian law relies primarily on consent and a limited set of “legitimate uses” explicitly enumerated in the statute. This divergence raises important questions about the pros and cons of the legitimate interest basis, its impact on the free flow of data, and whether India might benefit from adopting a similar concept. Recent developments in Europe, including a leaked proposal to broaden legitimate interests to cover AI training (flagged by privacy advocates such as NOYB), further highlight the stakes of this debate. This article presents a neutral, fact-based analysis of both sides of the issue, examining legitimate interest as a lawful processing ground, how the DPDP Act approaches lawful bases, the implications for data flows and AI, and whether India needs a legitimate interest provision in its privacy regime.
Legitimate Interest as a Lawful Basis under GDPR
Under the GDPR, legitimate interest is one of six lawful bases for processing personal data. It serves as a flexible “catch-all” ground in situations where other bases (like consent, contract, legal obligation, vital interest, or public task) do not neatly apply.
To rely on legitimate interests, a data controller must meet three key conditions:
Purpose Test – Identify a legitimate interest (a real business or societal benefit) for the processing. This could range from fraud detection, network security, and direct marketing, to analytics and internal administrative transfers.
Necessity Test – Show that the processing is necessary to achieve that interest (i.e. reasonably effective, less intrusive alternatives are not available).
Balancing Test – Weigh the legitimate interest against the individual’s rights and freedoms. If the individual’s privacy interests override the business interest, the processing cannot proceed without consent or another basis.
This balancing requirement means legitimate interest isn’t an automatic carte blanche; organizations must be able to demonstrate that they have considered the potential impact on individuals. For example, a company might have a legitimate interest in using client data to improve its services or prevent fraud, but it must ensure that this use does not unjustifiably harm the clients’ privacy or expectation of confidentiality. Regulatory guidance in Europe (from bodies like the European Data Protection Board) further emphasizes transparency: individuals should generally be informed if their data is processed under legitimate interest. The guidance also gives examples of where this basis may or may not be appropriate.
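The three-part test described above can be pictured as a simple decision procedure. The sketch below is purely illustrative: the field names and the numeric “weighing” are invented for exposition, and a real Legitimate Interest Assessment is a documented, qualitative legal analysis, not a boolean function.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    interest: str           # the claimed legitimate interest (purpose test)
    is_necessary: bool      # no less-intrusive alternative exists (necessity test)
    individual_impact: int  # assessed impact on rights/freedoms, 0-10 (invented scale)
    business_benefit: int   # assessed weight of the interest, 0-10 (invented scale)

def may_rely_on_legitimate_interest(a: Assessment) -> bool:
    # Purpose test: an identifiable, real interest must be stated.
    if not a.interest:
        return False
    # Necessity test: the processing must be needed to achieve it.
    if not a.is_necessary:
        return False
    # Balancing test: the individual's interests must not override the
    # controller's. If they do, consent (or another basis) is needed.
    return a.business_benefit > a.individual_impact

fraud_check = Assessment("fraud detection", is_necessary=True,
                         individual_impact=3, business_benefit=8)
print(may_rely_on_legitimate_interest(fraud_check))  # True in this toy model
```

The point of the sketch is structural: each of the three tests is a gate, and failing any one of them pushes the controller back to consent or another lawful basis.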
Pros of Legitimate Interest in GDPR: The major advantage of having a legitimate interest clause is flexibility. It allows beneficial data processing operations that don’t fit neatly into other categories. Businesses can pursue non-consent-based processing for legitimate purposes such as improving network security (e.g. scanning traffic for malware), preventing fraud and abuse, personalizing user experience, or certain forms of marketing, without needing to constantly burden users with consent requests. This flexibility is often seen as crucial for innovation and the free flow of data in the digital economy. It acknowledges that consent is not practical in every scenario; for instance, an e-commerce site may need to use an individual’s purchase history to detect fraudulent transactions (a legitimate interest) without pausing to ask for explicit consent each time. Legitimate interest can thus reduce compliance friction for businesses and enable data to be used in creative, responsible ways that benefit both organizations and consumers. It also avoids “consent fatigue” for users by eliminating the need for trivial or routine consent prompts when an appropriate interest and safeguards exist.
Cons and Safeguards: On the other hand, legitimate interest carries risks if misused or interpreted too broadly. Its open-ended nature can be a double-edged sword. Privacy advocates warn that some organizations might declare almost any profit-driven activity as a “legitimate interest” and use it to justify intrusive data processing without consent. There is inherent subjectivity in the balancing test: businesses might be biased in judging their own interests against individuals’ privacy. This can lead to overreach, where personal data is used in ways people do not expect or want. For example, undisclosed profiling or targeted advertising might be justified by a company as its legitimate interest, even though users might find such uses intrusive. Because of these concerns, GDPR requires careful assessment and documentation of the rationale when using this basis, and regulators can hold companies accountable if they get the balance wrong. Nonetheless, the critique remains that “legitimate interest” leaves a lot of discretion to data controllers and could undermine individuals’ control over their own data if not strictly enforced. It introduces some uncertainty for individuals about when and how their data might be used without consent, potentially eroding trust if organizations do not handle it transparently.
India’s DPDP Act Approach: Consent and Specific “Legitimate Uses”
India’s Digital Personal Data Protection Act, 2023 takes a notably different approach. The DPDP Act does not list “legitimate interests” or “contractual necessity” as general lawful bases for processing personal data. Instead, the law is largely consent-centric: Section 4 of the Act specifies that personal data can be processed only for a “lawful purpose” and either on the basis of the individual’s consent or for certain “legitimate uses.” In other words, consent is the primary ground for processing under Indian law, except in a closed set of exceptional scenarios where consent is not required. These exceptions (termed “certain legitimate uses” in the Act, defined in Section 2(d) read with Section 7) serve a role somewhat analogous to lawful bases in other laws, but they are specifically enumerated and limited.
Section 7 of the DPDP Act, 2023 lists the situations that count as “legitimate uses” where processing is allowed without the individual’s consent. These include:
Voluntary Data Sharing: When a data principal (the individual) voluntarily provides their personal data to the data fiduciary and does not indicate non-consent, it can be processed for the purpose for which it was provided. This essentially covers scenarios where an individual willingly gives information for a service; their consent is implied by their action unless they explicitly opt out. (For example, if you voluntarily submit your details to book a cab or open a bank account, the company can use that data for the intended service without obtaining a separate consent form each time.)
Public and State Functions: Processing by government departments or authorities to provide services or benefits to individuals. If an individual has already given consent to avail a government benefit, or if their personal data is already available with the state (in a database or record), further processing to deliver or improve that public service is allowed. Moreover, any performance of a function under law by the State (e.g. law enforcement, issuing licenses, ensuring national security, or maintaining public order) is permitted. Essentially, this corresponds to what other laws might classify under “public interest” or “legal obligation” bases: the government doesn’t need consent for official duties and mandated services.
Legal Obligations and Judicial Orders: Compliance with any law in force or any court judgment/order that requires disclosure or processing of personal data. Organizations can process data to meet their legal obligations or pursuant to a court directive without consent (for instance, responding to a court subpoena or an order to produce documents). Similarly, orders from foreign courts relating to civil or contractual claims are covered, ensuring companies can comply with cross-border legal processes when needed.
Emergencies and Public Health: Data can be used without consent to respond to medical emergencies or crises, such as when there is a threat to someone’s life or immediate health. During epidemics, disease outbreaks, or public health threats, necessary processing (like contact tracing or sharing data for coordination of care) is allowed. Likewise, during disasters or breakdowns of public order – think earthquakes, floods, or riots – data may be processed to ensure safety, provide assistance, or support disaster response operations. These provisions resemble the “vital interests” concept in GDPR (life-and-death situations) and broaden it to public health and disaster management contexts.
Employment Purposes: The Act expressly recognizes processing for employment-related purposes as a legitimate use. An employer can process personal data of employees without consent when it is needed for standard employment activities: recruitment, payroll and benefits, attendance, performance evaluation, workplace security, or even investigations of misconduct. This also extends to protecting the employer from liability or loss (for example, preventing corporate espionage or safeguarding trade secrets). The rationale is that in the employer-employee relationship, requiring explicit consent for every HR or business operation would be impractical, and employees often cannot freely refuse consent to their employer anyway. By statutorily permitting these uses, the law attempts to balance business needs with employee privacy, so long as the data use stays within what an employee would reasonably expect in the employment context.
“Reasonable Expectations” in Context (Implied): While not explicitly labeled as a separate clause, the combination of the above provisions effectively aligns with the idea that certain processing within the context of the relationship and expectations of the individual is allowed. For instance, if a customer has provided data to a bank, it is understood the bank will use it for routine service operations like fraud monitoring or record-keeping, tasks the customer can reasonably anticipate as part of using the service. The DPDP Act’s first “legitimate use” (voluntarily provided data) alongside sector-specific rules essentially enforces that any non-consensual processing should not exceed what the individual would consider appropriate to that relationship or transaction. This concept of reasonable expectation acts as a boundary: it prevents abuse of the exception by ensuring that if a use would surprise or upset the average person in that scenario, the business likely should seek explicit consent instead.
In summary, India’s approach is a closed-list of allowable purposes rather than a broad, controller-determined balancing test. Notably absent from the DPDP Act’s list is any general provision for “legitimate interests” of private companies or processing “necessary for contractual performance” that isn’t covered by the individual’s voluntary provision of data. Indian law effectively says: beyond these listed contexts, any other purpose requires fresh consent from the individual.
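The structural contrast with GDPR can be made concrete with a short sketch: under the DPDP Act, the non-consent question is set membership in a closed list, not a controller-run balancing exercise. The category names below are paraphrases of Section 7’s clauses invented here for exposition, not statutory labels.

```python
# Paraphrased (non-statutory) labels for the Section 7 "legitimate uses".
LEGITIMATE_USES = {
    "voluntarily_provided_for_stated_purpose",
    "state_function_or_benefit",
    "legal_obligation_or_court_order",
    "medical_emergency_or_public_health",
    "disaster_or_public_order_breakdown",
    "employment_purposes",
}

def lawful_basis(purpose_category: str, has_consent: bool) -> str:
    """Toy model of Section 4 DPDP Act: consent, or an enumerated use."""
    if has_consent:
        return "consent"
    if purpose_category in LEGITIMATE_USES:
        return "legitimate use (Section 7)"
    # No general balancing test to fall back on: anything outside the
    # closed list requires fresh consent from the data principal.
    return "requires consent"

print(lawful_basis("employment_purposes", False))  # legitimate use (Section 7)
print(lawful_basis("ai_model_training", False))    # requires consent
```

The absence of any `else`-branch balancing logic is the whole point: where a GDPR controller could weigh its interest against the individual’s rights, an Indian data fiduciary simply falls off the list.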
Pros and Cons of the Indian Approach (No General Legitimate Interest)
India’s decision to omit a general legitimate interest clause has sparked debate. There are advantages and disadvantages to this approach, and its impact on data flows and innovation is a key consideration.
Pros / Rationale for Omission: From a privacy rights perspective, the absence of an open-ended legitimate interest basis means stronger individual control and legal certainty. The law explicitly tells citizens and businesses what the non-consensual exceptions are (mostly common-sense or public-interest scenarios); everything else, by default, requires consent.
This clarity can build trust, as individuals know that companies cannot unilaterally decide to use their data for new purposes under a vague “interest” claim. It also simplifies compliance in a way: companies have less subjective analysis to do (no complex internal balancing tests) – if an activity isn’t on the allowed list, they must obtain consent or refrain. Privacy advocates might argue this is a pro-user approach that minimizes the risk of abuse. By keeping the law strict, India is signaling that the baseline expectation is consent, and exceptions are narrowly tailored by law, not left to corporate interpretation. This could reduce instances of data being repurposed behind the scenes, thereby protecting individuals from covert profiling or marketing uses they never agreed to. It may also make enforcement more straightforward: regulators don’t have to second-guess whether a company’s “legitimate interest” claim was valid; they can simply check whether the situation fits one of the defined categories.
Cons / Challenges: On the other hand, the closed-list approach can lack flexibility, potentially hindering beneficial data uses and the free flow of data in the economy. Businesses in India might find themselves constrained in cases where, under GDPR or other regimes, they would have been allowed to proceed under legitimate interest. For example, activities like improving a product using user data, running certain analytics, or performing internal research and development on data might not squarely fall under an enumerated “legitimate use” in the DPDP Act. In those cases, companies must fall back on obtaining consent from users (which can be logistically difficult and may lead to users not bothering to opt in) or simply forgo the data processing. This could slow down innovation or add compliance burden. Startups and tech firms have expressed concern that not recognizing legitimate interest might put Indian law out of sync with global norms, requiring more frequent consent pop-ups and potentially putting them at a competitive disadvantage. The “free flow of data” – a phrase often used to denote the ease with which data can move and be utilized across sectors and borders – could be impacted if organizations are overly cautious or hamstrung by needing explicit permission for every new use of data. In a digital economy, data is a key resource; critics worry that India’s strict regime might create data silos where useful information isn’t shared or utilized for growth-oriented purposes (like AI development, research, or cross-company collaborations) because no clear legal basis exists to do so without running a massive consent campaign.
It’s also worth noting that even under GDPR’s legitimate interest framework, there is an implied requirement of accountability and responsible use: companies must be thoughtful and can’t just do anything they please. In India’s case, by not giving that discretion at all, the law errs on the side of caution but perhaps doesn’t trust organizations to strike the balance themselves. This might reflect the regulators’ view on the maturity of the compliance environment: they might have decided that a broad legitimate interest clause would be prone to misinterpretation or misuse in the Indian context. The downside is that genuine good-faith processing that poses minimal privacy risk still faces a hurdle if not explicitly exempted.
Impact on International Data Flows and Alignment
The divergence between GDPR and India’s DPDP Act in lawful bases can have implications for international data transfers and global businesses. Many multinational companies operate in both Europe and India. Under GDPR, those companies might rely on legitimate interests for some processing operations; but for data pertaining to Indian data principals, they cannot invoke the same ground. This means companies must implement jurisdiction-specific compliance strategies, possibly maintaining separate data handling processes or consent mechanisms for Indian data vs. EU data. This lack of harmony could act as a barrier to seamless data flow between India and other markets, since what is lawful in one region may not be lawful in the other.
For instance, a European online service that personalizes content based on user behavior might do so under legitimate interest in the EU (with opt-out options for users). But if that same service has Indian users, the personalization might legally require explicit consent in India since it’s not a listed “legitimate use”. Such differences complicate data operations and could discourage companies from offering uniform services, potentially affecting Indian users’ experience or the availability of certain data-driven features. From a policy standpoint, some argue that aligning with frameworks like the GDPR (which has become a global benchmark) could facilitate cross-border data partnerships and make it easier for Indian companies to collaborate internationally.
On the other side, India’s firm stance might also be seen as a statement of data sovereignty and prioritizing user rights, which could in fact become a selling point if global sentiment shifts towards stricter privacy. If other jurisdictions see issues with the broad legitimate interest approach, they might eventually gravitate toward more consent-centric models too. Indeed, countries often watch each other’s experiments in privacy law; India’s model will be a significant case study in whether a largely consent-driven regime can support a thriving digital economy without a general “legitimate interest” clause.
The NOYB Report and EU’s Leaked Proposal: Legitimate Interest for AI
While India has chosen one path, Europe is itself re-examining the scope of legitimate interests, especially in light of rapid advances in artificial intelligence. In late 2025, news broke of a leaked draft proposal by the European Commission to amend the GDPR as part of a broader “Digital Omnibus” package. According to analysis by NOYB (None of Your Business, a European privacy NGO led by Max Schrems) and other privacy advocates, these draft proposals could significantly expand the leeway for companies to invoke legitimate interest, particularly to facilitate AI development.
One of the most controversial ideas in the leak is that training or operating AI systems might explicitly be categorized as a “legitimate interest” under GDPR. In practical terms, this would mean that tech companies could use personal data to train artificial intelligence models without seeking consent, simply by claiming it as their legitimate interest. For example, an online platform could scrape or aggregate user data it has collected over the years and feed it into a machine learning model to improve its AI services or algorithms, without asking those users for permission for this new use of their data. Under current GDPR, that kind of secondary usage often would require consent or another specific lawful basis, especially if the data is sensitive or if users would not expect it. The leaked reform, however, seems driven by a desire to “boost AI innovation” by cutting perceived red tape, essentially giving AI activities a kind of privileged status.
Privacy advocates’ concerns: NOYB and others have reacted sharply, calling this a potential “wrecking” of core GDPR principles. They argue it would undermine the tech-neutral stance of the law: GDPR was meant to apply equally regardless of technology, but here AI would get a special exemption. A striking illustration given by NOYB is that, if these changes passed, processing personal data in traditional software or a database would require a legal basis (like consent or legal obligation), but doing the exact same processing via an AI system could be automatically deemed a legitimate interest. This could be seen as skewing the playing field toward AI-based processing and against more manual or traditional processing, ironically incentivizing companies to funnel data into AI to exploit the looser rules.
Moreover, allowing AI training under legitimate interest poses risks to individual rights. AI models often derive patterns from huge amounts of personal data including potentially sensitive information. The leaked proposals reportedly also seek to narrow the definition of “sensitive data” (GDPR Article 9) so that inferred sensitive traits (like using big data to guess someone’s health status or sexual orientation) might not be protected unless they are “directly revealed” by the data. Privacy experts worry that companies could legally justify profiling and inferring highly private facts about people for AI, which currently would be tightly regulated or outright prohibited without consent. For instance, an AI could analyze a person’s social media or purchase history to infer they are likely pregnant or have a certain political leaning; under existing GDPR this inference would likely be treated as sensitive personal data processing (requiring explicit consent or a vital interest/public interest ground), but the new proposal might not consider it “directly revealed” sensitive data, thus it could fall under legitimate interest. This reduces protections for individuals against automated profiling and decision-making.
Another angle is the impact on device privacy. NOYB noted that changes to ePrivacy rules are being considered alongside GDPR changes, which could permit companies to access data on user devices (like smartphones or computers) under more circumstances. Combined with a broad legitimate interest for AI, this might let companies gather information from personal devices, such as app usage data or sensor data, to feed into AI models, all without consent. For example, imagine an operating system manufacturer remotely scanning files or app data on user devices to improve an AI assistant; such actions today would likely violate privacy laws, but the future regime might permit them under claims of “security” or “statistical purposes” via legitimate interest. Privacy groups label this a dangerous erosion of privacy, effectively enabling surveillance-like collection for the sake of AI development.
It’s important to note these proposals are not law yet. They reflect a tug-of-war between innovation advocates and privacy defenders. Proponents argue Europe must ease some GDPR constraints to stay competitive in AI; they see the current rules as too restrictive, possibly causing Europe to fall behind the U.S. or China in the AI race. The European Commission has hinted that any GDPR tweaks are meant to “make the GDPR more operational, not to weaken it,” focusing on reducing burdens for businesses, especially smaller ones. However, the breadth of the leaked changes – from redefining personal data and limiting data subject rights to broadening legitimate interests – suggests a significant liberalization that critics say would be “deregulation by stealth” benefiting mostly Big Tech firms.
What This Means for India and the Legitimate Interest Debate
The developments in the EU lead to a natural question: Should India incorporate a legitimate interest clause in its law, or is it better off without it? The answer is not straightforward, and there are compelling points on both sides.
Arguments For Introducing Legitimate Interest in India:
Proponents of adding a legitimate interest basis to the Indian framework might argue that it could unlock more avenues for data-driven innovation in India. If carefully defined (perhaps with a statutory balancing test and guidance), legitimate interest could empower Indian companies to utilize data for things like AI training, fraud prevention, or service improvement without constantly relying on explicit consent. This could make Indian businesses more agile and competitive, aligning them with global practices. It would also reduce the compliance gap between India and jurisdictions like the EU, potentially easing international data exchange and interoperability. For example, a legitimate interest provision could allow Indian firms to engage in responsible analytics and machine learning on user data (with safeguards) to build better products, something that might currently require either clunky consent workflows or simply not be done. As AI and big data become crucial in every sector, not having such a provision might put domestic companies at a disadvantage or push them to find legally grey workarounds. Some also point out that Indian law already trusts organizations with context-based judgment in areas like the “reasonable purposes” in older drafts or the “reasonable expectations” concept implied in current Section 7, so why not explicitly have a broader but principled category like legitimate interest? If the EU does move towards legitimizing AI training under legitimate interest (despite the backlash), India might risk being left behind in the AI innovation curve if its laws are too rigid, unless it provides alternative mechanisms to achieve the same ends.
Arguments Against Legitimate Interest (Maintaining Status Quo):
On the other hand, there is a strong case for why India may not need a generic legitimate interest clause, at least not yet. The current DPDP Act structure reflects a deliberate policy choice prioritizing individual consent and narrowly defined exceptions, arguably more suitable for a country where data protection is new and public awareness is still growing. Introducing a broad legitimate interest ground could be premature in India’s regulatory environment.
Critics worry that without a deep-rooted culture of compliance and without decades of jurisprudence on data rights (unlike in the EU), giving companies a discretionary lever like legitimate interest might lead to excessive data exploitation in practice. They suggest it’s safer to start with a tighter regime and perhaps loosen it gradually via rule-making or case-by-case exemptions, rather than open the floodgates.
Additionally, India’s law already tries to cover necessary bases like government functions, emergencies, and employment in Section 7. If some genuinely beneficial processing purpose is found missing, the government can amend the law or issue rules to add it to the legitimate uses list, providing a controlled way to expand non-consensual processing grounds. This is a more cautious, calibrated approach than an open-ended clause. In essence, the absence of legitimate interest in the DPDP Act could be seen as a feature to ensure trust in the system from day one, showing citizens that their data won’t be used behind their backs except in clear, limited situations. Given concerns over misuse of personal data in India (like unsolicited marketing calls, profiling by apps, etc.), a strict consent requirement might actually push companies to improve transparency and data hygiene, which in the long run builds a more sustainable data ecosystem.
Finally, the scenario playing out in Europe serves as a cautionary tale: even a mature regime like the GDPR is wrestling with how companies push the boundaries (e.g., Meta’s attempt to use “legitimate interests” or other bases for personalized ads, or now the AI loophole proposals). If India were to include legitimate interest, it would need robust safeguards and enforcement to prevent it from becoming a blanket excuse. Some might argue that India should first focus on effectively implementing the current law – educating organizations and consumers, setting up the Data Protection Board, and penalizing clear abuses – before adding complexity like a balancing test that would be harder to supervise.
Conclusion
The question of whether “legitimate interest” should be a lawful basis in data protection law strikes at the heart of the privacy vs. innovation balance. The European GDPR model uses legitimate interest to allow more free flow of data and flexibility, trusting organizations to act responsibly under regulatory oversight, but it now faces debates on how far to stretch this concept, especially with AI in the picture. The Indian DPDP Act has for now chosen a different path: emphasizing individual consent and a tightly bounded set of non-consent grounds, thus erring on the side of privacy and user autonomy.
Both approaches have their merits and drawbacks. Legitimate interest, as seen in the GDPR, can be a useful tool to enable data uses that benefit the economy and even consumers (through better services, fraud prevention, etc.) without overburdening everyone with consent formalities. It arguably supports the free movement of data and international consistency. However, it demands a high degree of responsibility and maturity from data controllers, and even then, it can be misused or become a loophole for invasive practices if not strictly checked. India’s omission of this basis simplifies the landscape and offers clarity – you either have consent or you fit an exception defined by law – which can strengthen privacy protections and trust. Yet, it may also create friction for legitimate data-driven activities and put Indian companies under heavier operational constraints in the fast-paced digital economy.
The latest NOYB report on the EU’s leaked GDPR changes underscores how contentious the legitimate interest provision can be when stretched: turning AI training into a presumed legitimate interest alarms many because it could open doors for mass processing of personal data without consent. India, by not having such a broad clause, would inherently forbid AI companies from training on personal data unless they seek consent or anonymize data, arguably providing stronger default privacy. But at the same time, if the rest of the world moves to facilitate AI, India will have to consider how to stay competitive, perhaps through other mechanisms or by revisiting its stance in the future with appropriate checks.
In conclusion, whether India “needs” a legitimate interest basis is ultimately a policy judgment about the kind of data economy and society it aspires to. The neutral observation is that there is a trade-off: including legitimate interest could spur innovation and align with global practices but carries risks of privacy erosion; excluding it bolsters individual rights and clarity but might slow certain data uses.
As India’s data protection regime matures and as global norms evolve, this debate will likely continue. By presenting both sides of the argument, we see that there is no one-size-fits-all answer – it comes down to how one weighs the importance of free data flow and innovation against the imperative of privacy and user trust. India’s current law chooses the latter emphasis; whether that remains true or changes with time will depend on real-world outcomes and the collective priorities of lawmakers, businesses, and citizens. Ultimately, striking the right balance will be key – ensuring personal data is protected from exploitation while still enabling progress in the digital and AI-driven age.