The OpenText–Ponemon study finds enterprises racing into GenAI without adequate security foundations

OpenText, in collaboration with the Ponemon Institute, has released a new global report highlighting a widening gap between the rapid adoption of generative AI and the maturity of security and governance frameworks required to manage its risks.

Titled “Managing Risks and Optimising the Value of AI, GenAI & Agentic AI”, the research reveals that while 52% of enterprises have already fully or partially deployed generative AI, many are doing so without the foundational controls needed to ensure secure and responsible usage.

The findings point to a broader industry challenge, where AI adoption is accelerating faster than organisations' ability to govern it effectively. According to OpenText's Muhi Majzoub, true AI maturity depends not just on deploying tools but on embedding security, governance, and transparency from the outset to build trust and deliver meaningful outcomes.

The report indicates that only one in five enterprises has reached a level of AI maturity where systems are fully deployed with assessed security risks, while fewer than half have implemented risk-based governance frameworks. This lack of maturity is further reflected in persistent gaps around data privacy, bias mitigation, and regulatory compliance.

A significant proportion of organisations continue to struggle with core AI risks. Many report difficulty in minimising model bias, managing prompt-related risks such as inaccurate or harmful outputs, and preventing the spread of misinformation. At the same time, nearly six in ten respondents believe AI makes regulatory compliance more complex, yet less than half have implemented AI-specific data privacy policies.

These governance gaps are also impacting the effectiveness of AI systems. While enterprises are deploying AI to enhance efficiency, particularly in cybersecurity operations, concerns around trust, reliability, and explainability are limiting outcomes. Only about half of respondents consider AI effective in detecting threats or reducing detection time, while issues such as errors in decision rules and data inputs continue to affect operational reliability.

The research further underscores that fully autonomous AI remains a distant goal. Fewer than half of organisations express confidence in AI systems’ ability to make safe, independent decisions, and a majority still rely on human oversight due to the evolving nature of threats and risks.

The report concludes that as AI becomes increasingly embedded in enterprise operations, organisations must prioritise governance frameworks, policy-based controls, and continuous monitoring. Without these foundations, the ability to scale AI responsibly and derive long-term business value will remain constrained.
