Agentic AI is not ‘more AI’—it’s a new way of running the enterprise: Dr Jagdish Bhandarkar, Deloitte India

As enterprises push ever deeper into automation, a new class of AI systems is beginning to take shape, one that goes far beyond familiar recommendation engines or task-specific co-pilots. Agentic AI represents a significant step forward, introducing autonomous decision loops, real-time orchestration across multiple systems and the ability to learn continuously from outcomes. This shift is prompting organisations to rethink long-established assumptions about governance, data stewardship, risk management and even the fundamental architecture of their operating models. Rather than simply enhancing existing processes, agentic systems are challenging leaders to reconsider how work is initiated, supervised and validated in an environment where software can act with increasing independence. In this context, Dr Jagdish Bhandarkar, Partner and CDO at Deloitte India, offers a timely perspective on how this evolution is reshaping the very way modern enterprises function, and what it will take for organisations to adapt responsibly and competitively.

How do you see Agentic AI shaping enterprise technology strategies in the coming years?

Agentic AI marks a shift from simply predicting outcomes or offering recommendations to systems that can plan tasks, take actions and learn from the results within defined guardrails. In practical terms, this means moving beyond isolated, single-task copilots towards coordinated “swarms” of agents that continually monitor signals, trigger workflows across systems, negotiate constraints and complete loops with measurable outcomes. The organisations gaining early value understand that agentic AI is as much a management innovation as it is a technological evolution. They are defining where machines should take the first pass, where humans must remain actively involved and how accountability works when software can initiate actions.

At the same time, we help clients moderate expectations. Early market evidence shows that many so-called "agentic" initiatives are re-scoped or abandoned when the value case is vague. We therefore advise companies to begin with bounded, high-leverage slices of work, such as resolving close-process variances, triaging L3 tickets with automated remediation or handling procurement exceptions, before attempting to scale. Analysts have highlighted issues of "agent-washing" and over-reach, so organisations should prioritise clarity around decision rights, strong fail-safes and a disciplined business case.

For leaders, the overarching aim is to improve decision velocity while preserving traceability. Agents should accelerate time-to-action but must do so with a verifiable chain of intent, data sources and approvals.

Autonomous systems raise questions about governance and trust. How can businesses use Agentic AI responsibly while keeping proper oversight?

Organisations should begin with a robust and recognised governance framework rather than relying on simple slogans. We use NIST’s AI Risk Management Framework (AI RMF) with its Govern–Map–Measure–Manage cycle and tailor profiles for each use case. This provides boards, risk teams and product teams with a common vocabulary for understanding hazards, controls and required evidence.

They should also establish an AI Management System. ISO/IEC 42001:2023 is quickly becoming the global reference model for an auditable AI operating structure, defining policies, roles, lifecycle controls and continuous improvement practices. We have seen organisations move more quickly and coherently when they adopt this standard.

Compliance must also be integrated from the start. The EU AI Act introduces a formal, tiered risk system. Companies working in or selling into Europe should build their conformity pathways now, including risk classification, technical documentation, monitoring procedures and user transparency measures. This significantly reduces the cost and disruption of retrofitting later and improves trust across all markets.

Finally, engineering teams should embed ethics into design. IEEE 7000 offers a method for translating stakeholder and societal values into system requirements exactly where critical agent-behaviour decisions are made.

Cloud adoption has matured, but many companies still struggle with data silos. How do you see cloud and data strategies evolving to support AI and analytics at scale?

First, organisations are moving from platform-centred thinking to treating data as products. Each data product has an owner, a service-level agreement, a contract and appropriate documentation so that downstream agents and systems can rely on it. This approach, often described as data mesh, addresses the root causes of silos: unclear ownership and inconsistent quality.

Second, the lakehouse is becoming the default enterprise backbone. For most companies, a governed lakehouse architecture provides the right combination of flexibility, data-warehouse performance and rigorous governance. It supports the needs of both batch and streaming workloads and enables features required for machine-learning and agent-driven systems.

Third, data quality is increasingly being addressed at the point of creation. AI magnifies even small data defects, so preventing errors as data is entered or generated offers the fastest return on investment. Data contracts, automated quality gates and transparent quality metrics at the source are becoming essential.
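As an illustration of a quality gate enforced at the point of creation, the sketch below validates records against a simple data contract before they enter the platform. The contract fields and rules here are hypothetical, chosen only to show the pattern of rejecting defective data at the source rather than downstream.

```python
# Hypothetical data contract: each field maps to a validation rule that must
# hold before a record is accepted into the data platform.
CONTRACT = {
    "order_id": lambda v: isinstance(v, str) and len(v) > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency": lambda v: v in {"INR", "USD", "EUR"},
}

def validate(record: dict) -> list:
    """Return a list of contract violations; an empty list means the record passes."""
    errors = []
    for field, rule in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not rule(record[field]):
            errors.append(f"invalid value for {field}: {record[field]!r}")
    return errors

good = {"order_id": "SO-1001", "amount": 250.0, "currency": "INR"}
bad = {"order_id": "", "amount": -5, "currency": "GBP"}
print(validate(good))  # []
print(len(validate(bad)))  # 3
```

In practice such gates are wired into ingestion pipelines or APIs, with violation counts published as the transparent quality metrics the paragraph above describes.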

Ultimately, scaling AI is less about acquiring extra tools and more about adopting a clear data operating model with defined ownership, shared standards and platform automation that makes compliant behaviour simple.

With increasing digital transformation, cybersecurity threats are also growing. How should enterprises rethink security for this new landscape?

Zero Trust remains the foundation. Identity-based, context-aware and policy-driven access controls, as defined in NIST SP 800-207, are essential. These principles must now be extended to address AI-related risks. Agents broaden the attack surface by calling external APIs, using tools and executing instructions. Security teams need to anticipate prompt injection attacks, data exfiltration through tool use, model-level denial-of-service attacks and supply-chain weaknesses. OWASP’s Top 10 for LLM and GenAI applications provides an excellent practical starting point.

Enterprises should give equal attention to availability and integrity threats. ENISA’s 2024 guidance highlights availability attacks and ransomware as top concerns. Because agentic automation can increase the impact of any breach, strong containment is crucial. We recommend least-privilege access for tools, robust output validation, mandatory human oversight for high-impact actions and tamper-evident logging to support both investigation and regulatory obligations.
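One way to picture tamper-evident logging is a hash chain, where each log entry's digest covers the previous entry's digest, so any after-the-fact edit breaks verification. The sketch below is a minimal illustration of that idea, not a production audit system; the agent and action names are invented.

```python
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an entry whose digest covers the previous entry's digest (a hash chain)."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain from the start; any modified entry breaks the links."""
    prev = "0" * 64
    for rec in log:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, {"agent": "refund-bot", "action": "approve_refund", "amount": 120})
append_entry(log, {"agent": "refund-bot", "action": "notify_customer"})
print(verify(log))  # True
log[0]["entry"]["amount"] = 9999  # tampering with a past action
print(verify(log))  # False
```

Real deployments would add signing, secure timestamping and append-only storage, but the chaining principle is what makes the record useful for investigation and regulatory evidence.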

What do you see as the biggest obstacles to enterprise adoption of AI-driven technologies overall, and what can organisations do now to prepare?

A major barrier is trust and control. Leaders remain cautious about allowing software to take autonomous actions. Graduated autonomy provides a path forward: beginning with assistive tools, moving to supervised autonomy with reversible actions and eventually deploying narrow, fully autonomous loops when KPIs and rollback mechanisms have been validated.
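The graduated-autonomy idea can be expressed as a simple policy check: each action carries a required autonomy tier and a reversibility flag, and an agent may only execute actions its granted tier covers. The tiers, actions and rules below are illustrative assumptions, not a prescribed scheme.

```python
from enum import Enum

class Autonomy(Enum):
    ASSIST = 1      # agent drafts, human executes
    SUPERVISED = 2  # agent executes reversible actions under human review
    AUTONOMOUS = 3  # agent runs a narrow, validated loop on its own

# Hypothetical policy: (required tier, is the action reversible?)
POLICY = {
    "draft_reply": (Autonomy.ASSIST, True),
    "restart_service": (Autonomy.SUPERVISED, True),
    "issue_refund": (Autonomy.AUTONOMOUS, False),
}

def may_execute(action: str, granted: Autonomy) -> bool:
    """Allow an action only if the agent's granted tier meets the policy's bar."""
    required, reversible = POLICY[action]
    if granted.value < required.value:
        return False
    # Irreversible actions always demand the highest, explicitly validated tier.
    if not reversible and granted is not Autonomy.AUTONOMOUS:
        return False
    return True

print(may_execute("restart_service", Autonomy.SUPERVISED))  # True
print(may_execute("issue_refund", Autonomy.SUPERVISED))     # False
```

Promotion between tiers then becomes an evidence-based decision: an action moves up only once its KPIs and rollback mechanisms have been validated, as described above.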

Lack of clarity on value is another obstacle. Impressive demonstrations do not constitute a strategy. Organisations should use a jobs-to-be-done perspective and tie each agent to a specific financial or risk objective, such as days-sales-outstanding, mean time to resolution, inventory turns or claims leakage. Analysts have warned that many agentic initiatives will be cancelled if value remains vague, so clear scorecards and time-boxed proofs of value are essential.

Data readiness is a further challenge. Weak lineage, uncertain ownership and inconsistent quality stop AI scaling efforts in their tracks. Organisations should adopt data-product thinking, modernise to a governed lakehouse architecture and invest in data quality at the source.

Operating model and skills also play a role. Agentic AI influences organisational design, decision-making and accountability. An AI Ops & Assurance capability—covering red-teaming, agent and model observability, policy enforcement and post-incident learning—is increasingly important.

Finally, organisations should prepare for the compliance runway. Even outside Europe, the EU AI Act will inform global expectations. Companies should start mapping use cases to risk categories and preparing documentation, human oversight processes and post-market monitoring systems.

Agentic AI is not merely a more advanced form of AI; it represents a fundamentally different way of running an organisation. The leaders in this space will combine confident experimentation with disciplined governance. They will begin with narrow, well-defined loops, implement strong instrumentation and scale only what proves safe, fair and valuable. Balancing speed with stewardship is how organisations will build lasting advantage over the next 12 to 24 months.
