By 2028, half of all organisations are expected to adopt a zero-trust approach to data governance, driven largely by the rapid spread of unverified AI-generated data, according to new predictions from Gartner.
The shift reflects a growing realisation among enterprises that data can no longer be assumed to be trustworthy by default. As AI-generated content becomes increasingly difficult to distinguish from human-created data, organisations are being forced to rethink how they authenticate, verify, and govern information used for business-critical decisions.
“Organisations can no longer implicitly trust data or assume it was human generated,” Gartner noted, warning that unchecked AI-produced data could undermine both financial and operational outcomes unless stricter verification mechanisms are put in place.
AI data growth raises new systemic risks
The warning comes as enterprises accelerate investments in generative AI. Gartner’s 2026 CIO and Technology Executive Survey found that 84% of organisations expect to increase GenAI spending in 2026, signalling that AI-generated data volumes will continue to surge across enterprise systems.
This trend introduces a lesser-known but increasingly serious risk: model collapse. As large language models are trained on vast pools of web-scraped data—much of which already includes AI-generated content—future models may end up learning primarily from the outputs of earlier models rather than from human-created data. Over time, this feedback loop can degrade accuracy, amplify bias, and erode the models' ability to reflect real-world conditions.
Beyond technical concerns, regulatory pressure is also expected to intensify. Gartner expects some regions to mandate stricter controls to verify whether data is “AI-free,” while others may adopt more flexible regimes. This fragmented regulatory landscape will add complexity for global organisations managing data across jurisdictions.
Why zero-trust is becoming unavoidable
Against this backdrop, Gartner argues that zero-trust data governance—where data is continuously authenticated, verified, and monitored rather than implicitly trusted—will become essential. Central to this approach is the ability to identify, tag, and track AI-generated data throughout its lifecycle.
Active metadata management is expected to play a decisive role. Organisations that can continuously analyse and update metadata will be better positioned to flag stale, biased, or unreliable data before it affects business decisions. Over time, Gartner sees this capability becoming a key differentiator between organisations that scale AI safely and those that struggle with data integrity.
What organisations should do now
To prepare for this shift, Gartner recommends several strategic steps. Enterprises should consider appointing a dedicated AI governance leader responsible for zero-trust policies, AI risk management, and compliance. Cross-functional collaboration between cybersecurity, data and analytics, and business teams will also be critical to assess where AI-generated data introduces new risks.
Rather than starting from scratch, organisations are advised to build on existing data and analytics governance frameworks—updating policies around security, metadata, and ethics to reflect the realities of AI-generated content. Adopting active metadata practices, Gartner says, will help enterprises detect when data needs recertification and prevent inaccurate information from silently propagating through critical systems.
As AI becomes embedded deeper into enterprise decision-making, Gartner’s message is clear: trust in data can no longer be assumed. Organisations that move early toward zero-trust data governance will be better equipped to scale AI responsibly—while those that delay risk building their future on increasingly uncertain foundations.