Is your data ready for trustworthy GenAI?

As India accelerates toward becoming a $1 trillion digital economy by 2026, the urgency to deploy responsible and scalable AI solutions has never been greater. Against this backdrop, Express Computer, in collaboration with Confluent and Google Cloud, recently hosted a thought-provoking roundtable discussion titled “Is Your Data Ready for Trustworthy GenAI?” on 27th June at JW Marriott, Bengaluru. Apart from Confluent and Google Cloud, the discussion saw the active participation of Acko, NoBroker, Licious, Cars24, Yubi, Gupshup Technology and Razorpay. 

The session brought together technology leaders, engineers, and product owners to deliberate on the readiness of enterprise data architectures for the age of generative AI. The discussions focused on three major dimensions—real-time data streaming, AI engineering challenges, and governance frameworks—all crucial for building trustworthy GenAI applications.

Real-time data is the foundation

A recurring theme across the panel was the increasing necessity of real-time data. Industries such as banking, retail, healthcare, and logistics are experiencing a paradigm shift in how they capture, process, and utilise data to support AI-driven decision-making.

Several studies underscore this trend. IDC notes that organisations leveraging real-time data streaming are witnessing a 44% higher return on investment, with some Indian enterprises reporting up to fivefold ROI. Gartner predicts that by 2025, 75% of enterprises will adopt real-time data streaming architectures, a significant leap from just 20% in 2022.

From fraud detection and inventory management to customer servicing and policy underwriting, businesses are actively embedding real-time pipelines to enhance responsiveness and improve accuracy. These capabilities are not just improving operational efficiency but are becoming core differentiators in highly competitive markets.
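
To ground the idea, here is a minimal sketch of what such a real-time pipeline can look like using the confluent-kafka Python client. The broker address, the topic names (payments, fraud-alerts), and the simple amount threshold are illustrative assumptions for this article, not details shared at the roundtable.

```python
# Minimal real-time fraud-check sketch using the confluent-kafka Python client.
# Broker address, topic names, and the threshold rule are illustrative only.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed local broker
    "group.id": "fraud-detector",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})

consumer.subscribe(["payments"])              # hypothetical input topic

try:
    while True:
        msg = consumer.poll(1.0)              # wait up to 1s for a new event
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Toy rule: flag unusually large transactions for downstream review.
        if event.get("amount", 0) > 100_000:
            producer.produce("fraud-alerts", json.dumps(event).encode("utf-8"))
            producer.poll(0)                  # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```

In a production deployment the threshold rule would typically be replaced by a model or rules engine, but the shape of the pipeline, consume, evaluate, and publish within milliseconds, is what gives real-time systems their edge over batch processing.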

Engineering for GenAI: Fast-paced, fragmented, and expensive

While the promise of GenAI is significant, building and running such systems is proving to be complex. “AI can really go and build a lot of complex problems and be able to run it at scale,” said Harsh Singla, Account Director, Confluent, setting the tone for discussions around the engineering realities of GenAI adoption.

The engineering leaders at the roundtable emphasised that the velocity of change in GenAI models is outpacing traditional software development cycles. What was cutting-edge just a few months ago is often rendered obsolete by newer, more efficient, and cheaper models.

This rapid innovation cycle creates unpredictability, especially in infrastructure and cost management. Organisations are finding it difficult to forecast monthly cloud bills as GenAI workloads fluctuate wildly. The shift from predictable infrastructure consumption to variable, model-based computing makes budgeting and capacity planning a challenge.

Tool fragmentation further adds to the complexity. Developers are experimenting with a wide variety of GenAI platforms, plugins, and sandbox environments, which makes it difficult to standardise development, enforce security policies, or maintain consistency across teams. As GenAI becomes more democratised, engineering leaders are being forced to rethink how they govern access, enforce CI/CD workflows, and ensure quality at scale.

GenAI in production: Beyond POCs

Despite the growing interest, many enterprises are still stuck in the early phases of adoption, running multiple proofs of concept without fully transitioning to production-grade systems. The discussion highlighted the pressing need to move from experimentation to measurable outcomes.

Organisations that have deployed GenAI in live environments shared examples across sales, customer service, internal policy management, and risk modeling. Semi-autonomous agents, often referred to as “co-pilots,” are being used to support customer acquisition, process automation, and document interpretation. However, the implementation maturity varies by function. Well-defined, structured workflows such as claims management or policy updates are seeing higher levels of automation, whereas more complex, personalised tasks like insurance sales still require human oversight.

Governance and trust: The next frontier

A major barrier to GenAI scalability is the lack of unified data governance. Gartner has forecasted that by 2027, 60% of organisations will fail to realise AI value due to fragmented governance models.

Enterprises are grappling with challenges around data lineage, schema enforcement, and reliability. In scenarios where multiple AI agents interact and generate data autonomously, it becomes difficult to attribute origin, validate accuracy, or ensure compliance. Questions about the trustworthiness of AI outputs—especially in decision-critical systems—remain largely unresolved.
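
A lightweight way to picture schema enforcement at the point of ingestion is shown below, a minimal sketch using the open-source jsonschema library. The event fields and the schema itself are assumptions chosen for illustration, not a standard prescribed during the discussion.

```python
# Minimal schema-enforcement sketch with the jsonschema library.
# The event shape and required fields are illustrative assumptions.
from jsonschema import validate, ValidationError

PAYMENT_SCHEMA = {
    "type": "object",
    "required": ["transaction_id", "amount", "currency", "source_system"],
    "properties": {
        "transaction_id": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string"},
        "source_system": {"type": "string"},  # records where the data came from
    },
}

def admit_event(event: dict) -> bool:
    """Reject events that do not conform before they reach any AI pipeline."""
    try:
        validate(instance=event, schema=PAYMENT_SCHEMA)
        return True
    except ValidationError:
        return False

print(admit_event({"transaction_id": "t-1", "amount": 250.0,
                   "currency": "INR", "source_system": "billing"}))  # True
print(admit_event({"transaction_id": "t-2", "amount": -5}))          # False
```

Rejecting malformed events at the boundary keeps downstream AI agents from learning on, or reasoning over, data whose origin and shape cannot be vouched for.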

Security concerns were a recurring theme throughout the discussion. Arijit Dutta, Country Head Strategic ISV GTM – India & SAARC, Google Cloud, noted, “Security in the AI world is a shared responsibility, it is the biggest of all conversations happening around today,” underscoring the collective effort needed across engineering, infrastructure, and business teams to ensure safe and trustworthy AI deployments.

Building deterministic systems that trace every output back to structured, governed data is emerging as a top priority, particularly in regulated industries. Yet, there’s no one-size-fits-all approach—especially when platforms are expected to operate across industries with constantly evolving needs.
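
One way to make that traceability concrete is to attach provenance metadata to every generated answer so it can be audited later. The sketch below is a generic illustration of the idea in plain Python; the field names, the model version label, and the generate_answer helper are all hypothetical.

```python
# Generic sketch of wrapping a GenAI answer with provenance metadata so every
# output can be traced back to the governed records that informed it.
# All field names and the generate_answer() stub are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TracedAnswer:
    answer: str
    model_version: str
    source_record_ids: list[str]          # IDs of governed records consulted
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def generate_answer(question: str, records: dict[str, str]) -> TracedAnswer:
    """Stand-in for a real model call; here we simply cite the governed records."""
    answer = f"Based on {len(records)} governed record(s): {question}"
    return TracedAnswer(
        answer=answer,
        model_version="policy-copilot-0.1",   # assumed version label
        source_record_ids=list(records.keys()),
    )

records = {"policy/claims-2024-rev3": "Claims above 1 lakh need two approvals."}
traced = generate_answer("What is the approval rule for large claims?", records)
print(traced.answer)
print("Sources:", traced.source_record_ids)
```

However simple, a record like this gives auditors and regulators something deterministic to inspect: which model produced the output, when, and from which governed sources.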

Future outlook: Smaller models, bigger possibilities

Looking ahead, the consensus among the speakers was clear—AI will become increasingly lightweight, decentralised, and integrated into everyday workflows. The shrinking size of GenAI models, including those capable of running locally on edge devices, will unlock new use cases that do not depend on large-scale infrastructure.
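
As a rough illustration of how accessible small models have become, the sketch below loads a compact open model through the Hugging Face transformers library and runs it entirely on local hardware. The choice of distilgpt2 is purely illustrative and was not a model named at the event.

```python
# Rough sketch of running a compact open model locally with Hugging Face
# transformers; "distilgpt2" is chosen purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Real-time data makes generative AI", max_new_tokens=25)
print(result[0]["generated_text"])
```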

There is also a clear trend toward hyper-personalisation and multi-modal interfaces. Traditional interfaces such as forms and dashboards are expected to give way to more intuitive, voice- and chat-based systems. This evolution will significantly expand access to AI for populations previously excluded due to digital literacy gaps.

As GenAI tools mature, smaller teams and startups will be empowered to build enterprise-grade solutions. Infrastructure—which had taken a back seat over the past decade—will once again become central to competitive differentiation. With GPUs in high demand and limited supply, organisations are revisiting their infrastructure strategies, debating the merits of in-house clusters versus cloud-based AI workloads.

Bridging technology with purpose

The roundtable discussion closed with a reminder that AI development should not be driven by hype or novelty alone. For GenAI to deliver sustained value, it must be aligned with real business outcomes, deliver measurable ROI, and address gaps in inclusion and accessibility.

Ultimately, the success of GenAI will depend not just on model accuracy or compute speed but on the human decisions that shape its deployment. Organisations that can build trustworthy AI on a foundation of clean, real-time data and agile governance will lead the next wave of digital transformation.
