Artificial intelligence is rapidly transitioning from a support function to a central force shaping how enterprises operate, compete, and innovate. As AI evolves from predictive and generative capabilities to more advanced, autonomous systems, organisations are being pushed to rethink not just what to invest in, but also what to consciously deprioritise. For years, businesses have built robust data ecosystems centred on structured data, dashboards, and human-led decision-making. Roles such as data analysts, BI developers, and business analysts were critical in translating data into insights. However, with machines now capable of interpreting, reasoning, and even acting on data, many of these long-standing models are being fundamentally challenged.
At the same time, enterprises are grappling with legacy systems that were never designed for this new AI-first reality. These systems are often rigid, filled with hard-coded logic, and disconnected in ways that make modernisation complex. The challenge extends beyond technology to governance, scalability, and even business models, particularly as organisations struggle to move AI initiatives from pilot stages to full-scale deployment. In this conversation, Uday Hegde, CEO of USEReady, shares his perspective on navigating this shift, from redefining skill priorities and modernising data architecture to building scalable AI systems and preparing for a future where AI becomes as essential, and as ubiquitous, as a utility.
Which data or AI skills should tech leaders deprioritise today?
There are several skills that were extremely relevant not too long ago but are now losing their importance, especially when viewed in the context of what AI can do today. I often explain this using an analogy from telecom: the move from 3G to 4G and now to 5G. AI is evolving in a similar way, and what we are experiencing today is essentially the 5G phase of AI.
Earlier stages, such as predictive analytics and traditional data science, can be thought of as 3G. Generative AI represents 4G. Now, we are entering the era of agentic AI, which is far more advanced. If you think about it, the idea of AI dates back to the 1950s, when the question was whether machines could think. Today, that question has been answered. Machines can think, and that fundamentally changes which skills remain relevant.
Skills that relied heavily on human-driven thinking are now at risk of being deprioritised. In the data space, for example, tools like Tableau, Power BI, and ETL-based workflows were once considered cutting-edge. Entire roles were built around these capabilities, including data analysts, BI developers, and business analysts. However, many of these roles are now at risk because AI can perform similar tasks more efficiently.
This does not mean these skills will disappear entirely. Just as 3G networks are still used for basic tasks like texting, these skills will continue to exist in certain contexts. But they are no longer the priority. Leaders need to recognise this shift and focus more on capabilities aligned with the latest stage of AI evolution.
What is the toughest part of modernising legacy data architecture?
The biggest challenge lies in what I call semantic debt. In the past, data systems were designed primarily for human consumption. Humans would interpret outputs and make decisions based on them. That approach has fundamentally changed because machines are now capable of doing that interpretation.
Legacy systems are filled with hard-coded rules, embedded business logic, and naming conventions that were designed for human understanding. As a result, organisations often end up with disconnected systems: one where data is stored and another where decisions are made. These systems do not communicate effectively with each other.
A useful analogy is the transition from railways to automobiles. When cars were introduced, the entire infrastructure had to change because it was not designed for that mode of transport. Similarly, legacy data architectures are not built for AI-driven environments.
Another challenge is the rigidity of ETL and ELT processes, along with logic embedded in dashboards. These systems are difficult to migrate and become barriers to adopting AI effectively. AI works best with raw, unstructured data such as documents and files, whereas most enterprise data today is structured and stored in databases. That structured data represents only a small portion of what is actually needed for AI-driven decision-making.
When does it make more sense to build in-house data capabilities versus buying them?
The simplest way to approach this decision is to follow the money. Organisations should own what directly contributes to revenue and outsource what simply keeps operations running.
For instance, companies do not build their own electricity infrastructure; they treat it as a utility and pay for it. Data capabilities can be viewed in a similar way. If a capability is not central to your competitive advantage, it is often more efficient to outsource it.
Take the example of a pharmaceutical company. Its core value lies in research and drug discovery. That is where it should invest heavily and retain ownership. However, functions like marketing or supporting data systems can be outsourced without affecting the company’s core strength.
We have seen this across industries. In one case, we worked with a chemical company to build a platform that effectively replaced highly specialised expertise in designing complex materials. That capability was directly tied to their intellectual property and revenue generation, so it made sense for them to own it.
The principle is straightforward: invest in what makes you money and outsource what keeps the lights on.
How should leaders rethink data governance in AI-driven systems?
Data governance has traditionally been structured around human decision-making, with clear hierarchies and approval processes. However, in AI-driven systems, machines are making decisions continuously, which requires a fundamentally different approach.
Governance in this context needs to be dynamic and integrated into the flow of decision-making. It is no longer about having a checkpoint at the end; it is about ensuring that the right controls are in place throughout the process.
This includes designing systems that know when to involve humans. For example, in customer interactions or financial transactions, there may be situations involving bias, ambiguity, or risk where human judgment is necessary. AI systems must be able to identify these moments and escalate them appropriately.
In practice, this can be quite complex. In one project involving invoice processing, we found that while AI could handle most cases, exceptions such as handwritten notes required human intervention. These kinds of edge cases highlight why governance needs to be continuous and adaptive rather than static.
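The escalation pattern described here can be sketched in code. The following is a minimal, illustrative example only: the names, fields, and confidence threshold are assumptions, not details from USEReady's actual invoice-processing system. It shows the core idea of governance built into the flow, where the AI handles routine cases and routes flagged edge cases, such as handwriting, to a human.

```python
from dataclasses import dataclass

# Assumed cut-off below which the AI should not act autonomously.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Invoice:
    invoice_id: str
    confidence: float       # model's confidence in its own extraction
    has_handwriting: bool   # known edge case requiring human judgment

def route(invoice: Invoice) -> str:
    """Decide whether the AI processes the invoice or escalates it.

    Governance lives inside the decision flow: every invoice passes
    through this check rather than being reviewed after the fact.
    """
    if invoice.has_handwriting or invoice.confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"
    return "auto_process"

# A clean, high-confidence invoice is handled automatically;
# a handwritten one is escalated regardless of confidence.
print(route(Invoice("INV-001", 0.97, False)))  # auto_process
print(route(Invoice("INV-002", 0.95, True)))   # escalate_to_human
```

The design choice worth noting is that escalation criteria are explicit and checked on every decision, which is what makes the governance continuous and adaptive rather than a checkpoint at the end.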
What distinguishes scalable AI systems from pilot-stage experiments?
One of the main reasons AI initiatives fail to move beyond the pilot stage is the gap between experimental performance and real-world expectations. Organisations are often impressed with early results, but those results do not always translate into production environments.
A key factor is accuracy. For an AI system to be viable in production, it typically needs to achieve an accuracy level of 90 to 95 percent or higher. Many pilot projects fall short of this, often reaching only around 70 percent. At that point, it becomes difficult to justify replacing existing processes, especially if those processes are already cost-effective.
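The accuracy gate described above can be expressed as a simple check. This is a hedged sketch under assumed numbers: the 90 percent threshold comes from the range mentioned in the interview, and the sample predictions are invented to illustrate a pilot stalling at around 70 percent.

```python
# Assumed minimum accuracy for promoting a pilot to production,
# based on the 90-95 percent range discussed above.
PRODUCTION_THRESHOLD = 0.90

def accuracy(predictions: list, labels: list) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def ready_for_production(acc: float, threshold: float = PRODUCTION_THRESHOLD) -> bool:
    """Gate deployment on measured accuracy against real data."""
    return acc >= threshold

# An illustrative pilot: 7 of 10 predictions correct, i.e. 70 percent,
# which falls short of the production bar.
pilot_acc = accuracy([1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                     [1, 0, 1, 0, 0, 1, 1, 1, 0, 0])
print(f"pilot accuracy: {pilot_acc:.0%}")  # 70%
print(ready_for_production(pilot_acc))     # False
```

The key is that the gate is applied to accuracy measured on real production data, not on the curated pilot dataset, which is exactly the gap the next paragraphs describe.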
Another issue is the nature of the data used in pilots. These projects often rely on curated datasets that represent ideal conditions. However, real-world data is far more complex and includes unexpected scenarios that the system may not have been trained on.
As a result, when these systems are deployed, they encounter situations they cannot handle effectively. This leads to errors, loss of trust, and ultimately, the abandonment of the project. Scaling AI requires working with real production data and investing in improving performance across all possible scenarios.
What is USEReady’s roadmap for the near future?
We have been working in the AI space for several years, which puts us in a strong position as the technology evolves. Initially, our focus was on enabling decision-makers with insights. Today, we are expanding that role to become an AI execution partner for our clients.
This means supporting organisations across the entire AI journey, from building literacy and defining principles to implementing frameworks and accelerating execution. We are investing in creating solutions and capabilities that allow our clients to scale AI effectively.
Looking ahead, one of the biggest challenges for the industry is the business model itself. Unlike traditional software, pricing AI is not straightforward. It is still unclear whether organisations should pay based on usage, outcomes, or some other metric. This is something the entire industry is trying to figure out.
Over time, AI is likely to become more like a utility, something that organisations rely on as a fundamental part of their operations. In that scenario, companies like ours will play a critical role in enabling and managing that infrastructure. It is a significant opportunity, and we are focused on positioning ourselves for that future.
From deprioritising legacy skills to redefining governance and rethinking business models, the conversation underscores a clear shift in how organisations must approach AI. What stands out is not just the pace of change, but the need for clarity in decision-making: what to build, what to let go of, and where to invest.
As Uday Hegde points out, the transition to an AI-first world is not just about adopting new technologies, but about unlearning old approaches. For leaders, the real challenge lies in navigating this transition with both speed and intent, ensuring they are not just keeping up with the change, but actively shaping it.