A Chief Technology Officer’s vision board for 2024 and beyond

By John Roese, Global Chief Technology Officer, Dell Technologies

Around this time of year, the world starts manifesting what it wants to see in the year ahead. While most vision boards focus on personal goals, my version as a chief technology officer features emerging technologies instead, with AI squarely in the center. The democratization of AI has thrust it to the forefront of most CIOs’ minds, and here we’ll unpack what lies ahead. I also want to make sure you leave room on your board for other technologies that will enter the fold in a more pronounced way – think Zero Trust, edge and quantum computing.

Without further ado, my vision board for 2024 and beyond…
Vision 1: The GenAI dialogue will move from theory to practice, shifting from training infrastructure and cost to inferencing and cost of operation — with more leadership onus.
While GenAI has sparked incredibly creative ideas of how it will transform business and the world, there are very few real-world, scaled GenAI activities. As we move into 2024, we will see the first wave of GenAI enterprise projects reach levels of maturity that will expose important dimensions of GenAI not yet understood in the early phases. That will include:
Shift from training infrastructure to inference infrastructure. 2023 has been all about picking models, deciding what data to use and where it will live, and other front-end system design topics. In 2024, those systems will shift into production – that is, inferencing. How do we best design this infrastructure? Where do we put it? How do we secure it? Answering those questions in 2024 will lead us to best practices and optimal outcomes.

Economic discussions will shift from cost of training to cost of operation. The cost to train or fine-tune a model can be high and the infrastructure demands significant, but we expect that to be a small part of the investment needed to apply GenAI to an enterprise. The price tag for training is a one-time cost tied to model size and the data set used. The cost of inferencing is tied to utilization level (transactions), user base size, data type (video, chat, etc.), ongoing maintenance and the associated data. An LLM fine-tuned to provide customer care may be inexpensive to build, but when it transitions to production for millions of customers and billions of transactions, it creates significant infrastructure demand and operating cost, as the rough sketch after this list illustrates.

Enterprises will shift from broad experimentation to a top-down strategic focus on choosing the few GenAI projects that can be transformational. Today, most enterprises are experimenting with GenAI and looking to make it a central part of their digital transformation. In 2024, they will realize that even if hundreds of use cases appear to be potentially transformational, no one will have the people, infrastructure or budget to put more than a handful of them into scaled production. That will necessitate a top-down prioritization that focuses on moving only the most important and valuable GenAI projects into production.
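
To make the training-versus-operation shift concrete, here is a minimal back-of-envelope sketch. Every figure in it is a hypothetical placeholder I chose for illustration, not a Dell estimate or a published price, but it shows why a cheap-to-build model can still carry a large operating cost at scale:

```python
# Hypothetical back-of-envelope model: one-time fine-tuning cost vs. ongoing inference cost.
# Every number below is an illustrative assumption, not a measured or published figure.

FINE_TUNE_COST = 250_000                  # one-time cost to fine-tune a customer-care LLM (USD, assumed)

TRANSACTIONS_PER_MONTH = 1_000_000_000    # assumed ~1 billion customer-care interactions per month
TOKENS_PER_TRANSACTION = 2_000            # prompt + response tokens per interaction (assumed)
COST_PER_1K_TOKENS = 0.002                # blended infrastructure cost per 1,000 tokens (USD, assumed)

monthly_inference_cost = (
    TRANSACTIONS_PER_MONTH * TOKENS_PER_TRANSACTION / 1_000 * COST_PER_1K_TOKENS
)

print(f"One-time fine-tuning cost : ${FINE_TUNE_COST:,.0f}")
print(f"Monthly inference cost    : ${monthly_inference_cost:,.0f}")
print(f"Annual inference cost     : ${monthly_inference_cost * 12:,.0f}")
# Under these assumptions, a year of inference costs roughly 190x the fine-tuning spend,
# which is the point of this vision: the economics shift from training to operation.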

Vision 2: The supply chain and ecosystem of enterprise GenAI will improve in 2024.
Our ecosystem of AI tools and services is expanding, diversifying and scaling. As a result, we will see diversification of AI frameworks (e.g., the new Linux Foundation UXL project). We will also see developers able to easily use, and create interfaces to, multiple types of accelerated compute through integrated frameworks: PyTorch on the client side, and ONNX and other open-standard AI runtime frameworks on the infrastructure side, which will democratize the co-pilot acceleration space.
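
As a minimal illustration of the portability these open runtime frameworks enable, the sketch below exports a toy PyTorch model to the ONNX format and runs it with ONNX Runtime, where the execution-provider list is the hook for different accelerators. The model and file names are placeholders, not anything from a real deployment:

```python
# Minimal sketch: train-side framework (PyTorch) -> open interchange format (ONNX)
# -> open runtime (ONNX Runtime) that can map onto different accelerators.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A toy model standing in for a real fine-tuned network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

example_input = torch.randn(1, 16)

# Export the PyTorch model to the open ONNX format.
torch.onnx.export(model, example_input, "toy_model.onnx",
                  input_names=["input"], output_names=["output"])

# Run the exported model with ONNX Runtime; the providers list is where
# different accelerators (CPU, GPU, NPU, etc.) can be plugged in.
session = ort.InferenceSession("toy_model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(1, 16).astype(np.float32)})
print(outputs[0].shape)  # (1, 4)
```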

As my colleague Jeff Clarke noted, keep an eye on the expanding availability of both closed and open-source models and tools, which will help enterprises implement GenAI. We anticipate the industry shift to openness will continue from foundational models to accessories and tools. We expect there to be an abundance of tools and models as we move forward.

Vision 3: Zero Trust will become real in 2024
We’ve spent 2023 talking about Zero Trust and its importance to cybersecurity. In 2024, Zero Trust will evolve from a buzzword into a real technology, with real standards and even certifications emerging to clarify what is and is not Zero Trust.

In 2024, Dell’s Project Fort Zero will be delivered to the market as the first commercial, full Zero Trust private cloud system. While it will initially focus on meeting the needs of the most demanding customers in the world (defense departments, etc.), its launch, certification and use cases will show every industry what the end state of Zero Trust adoption might look like.

We expect Zero Trust to be mandated in a wide range of industry use cases. This will kickstart the development of real Zero Trust architectures for industries ranging from core defense, to universities performing government-funded research, to critical infrastructure (industrial and digital). And with it, certifications will emerge that correct one of the major issues with Zero Trust – anyone can call something Zero Trust even if it only embraces parts of the model in fragmented point solutions. Zero Trust only works as a comprehensive architecture for IT systems. In 2024, we will see these certifications begin to separate real Zero Trust from marketing.
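
At its core, Zero Trust means every request is verified against identity, device posture and least-privilege policy before it is allowed, rather than being trusted because it originates inside the network. The sketch below is a purely illustrative, toy version of that per-request check; all names, fields and rules are hypothetical and do not represent the Fort Zero design or any Dell API:

```python
# Purely illustrative sketch of the Zero Trust principle "never trust, always verify":
# every request is evaluated against identity, device posture and least-privilege policy.
# All names, fields and rules are hypothetical, not a Dell or Project Fort Zero API.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    device_compliant: bool     # e.g., patched, attested device posture
    mfa_verified: bool         # identity verified with multi-factor authentication
    resource: str
    action: str

# Hypothetical least-privilege policy: which actions each role may take on each resource.
POLICY = {
    ("analyst", "research-data"): {"read"},
    ("admin", "research-data"): {"read", "write"},
}

ROLES = {"alice": "analyst", "bob": "admin"}  # assumed identity store

def authorize(req: Request) -> bool:
    """Deny by default; allow only when identity, device posture and policy all check out."""
    if not (req.mfa_verified and req.device_compliant):
        return False
    role = ROLES.get(req.user_id)
    allowed = POLICY.get((role, req.resource), set())
    return req.action in allowed

print(authorize(Request("alice", True, True, "research-data", "read")))   # True
print(authorize(Request("alice", True, True, "research-data", "write")))  # False: least privilege
print(authorize(Request("bob", False, True, "research-data", "read")))    # False: non-compliant device
```

The point of the comprehensive-architecture argument is that every one of these checks has to apply to every request across the whole IT system; implementing only one of them in an isolated point solution is what the emerging certifications are meant to call out.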

Vision 4: Edge platforms will emerge.
Modern edges are a new concept. A modern edge is the extension of the multicloud infrastructure into the real world (factories, hospitals, stores, etc.). Today, the default way to extend a cloud service into the real world is a point solution that delivers edge capability only for that specific cloud workload. The challenge with this “cloud extension” model is that as you use more clouds and cloud services, edge systems proliferate… one for each cloud, workload and system.

Dell, with Dell NativeEdge, has been developing an alternative path for the modern edge: a multi-cloud edge platform. NativeEdge lets customers build a common edge platform that can provide capacity, security and trust to any software-defined edge workload from any cloud, IoT or IT system.

In 2024, we expect this “edge platform” approach to become much more prevalent, with other companies and ecosystems delivering simpler edge platform-centric models. An early example of this approach is Multi-access Edge Computing (MEC) in telecom systems. 2024 will be a good year for enterprises to recognize that there are two ways to build a modern edge: as a proliferation of mono-edges or as a multi-cloud edge platform.
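
To make the contrast concrete, here is a purely hypothetical sketch of the two models: one edge stack per cloud versus one shared platform hosting workloads from every cloud. None of these class or function names represent NativeEdge or any real product API:

```python
# Hypothetical illustration only: "one edge stack per cloud" vs. a common multi-cloud
# edge platform. No real product (e.g., NativeEdge) API is implied.
from dataclasses import dataclass, field

@dataclass
class EdgeWorkload:
    name: str
    source_cloud: str   # the cloud or IT system the workload comes from

# Mono-edge model: every cloud ships its own edge stack, so hardware,
# security and operations are duplicated at each site for each cloud.
mono_edges = {
    "cloud-a": {"stack": "cloud-a edge appliance", "workloads": [EdgeWorkload("vision-qc", "cloud-a")]},
    "cloud-b": {"stack": "cloud-b edge appliance", "workloads": [EdgeWorkload("inventory", "cloud-b")]},
}

# Edge-platform model: one shared platform at the site provides compute,
# security and trust, and hosts software-defined workloads from any cloud.
@dataclass
class EdgePlatform:
    site: str
    workloads: list = field(default_factory=list)

    def deploy(self, workload: EdgeWorkload) -> None:
        # Common placement, provisioning and security policy would live here.
        self.workloads.append(workload)

factory_edge = EdgePlatform(site="factory-01")
factory_edge.deploy(EdgeWorkload("vision-qc", "cloud-a"))
factory_edge.deploy(EdgeWorkload("inventory", "cloud-b"))
print(len(mono_edges), "separate edge stacks vs 1 platform hosting",
      len(factory_edge.workloads), "workloads")
```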

Bonus Vision: Quantum Computing and GenAI become “entangled.”
Looking out a bit further, it is now very clear to some of us that quantum computing and AI are two parts of the same story. Today we are seeing explosive growth in AI use cases and early adoption with no signs of it slowing down. However, one of the major issues with GenAI and most large-scale AI is the extreme demand for computing resources.

Transformers, diffusion models, and other new techniques under GenAI are extremely resource-intensive probabilistic functions. It turns out that quantum computing is exceptionally good at highly scaled optimization problems where the goal is to find the best answer to a question within an almost infinite set of options. You could call a quantum system a probabilistic computer.
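
To ground that “probabilistic computer” framing, the sketch below sets up a tiny QUBO (quadratic unconstrained binary optimization) problem, the canonical form many quantum optimization approaches consume, and solves it classically by brute force. The matrix is an arbitrary example; real instances have thousands of variables, far beyond exhaustive search, which is exactly where probabilistic quantum sampling is expected to help:

```python
# Illustrative sketch of the kind of problem quantum systems target: a tiny QUBO
# (quadratic unconstrained binary optimization), solved here by classical brute force.
# A quantum annealer or variational algorithm would instead sample low-energy bit
# strings probabilistically from an exponentially large search space.
import itertools
import numpy as np

# Hypothetical 4-variable QUBO matrix; real problems have thousands of variables.
Q = np.array([
    [-1.0,  0.5,  0.0,  0.2],
    [ 0.0, -1.5,  0.3,  0.0],
    [ 0.0,  0.0, -0.5,  0.6],
    [ 0.0,  0.0,  0.0, -0.8],
])

def energy(bits: np.ndarray) -> float:
    """Objective value of one candidate bit string."""
    return float(bits @ Q @ bits)

# Exhaustive search over all 2^4 assignments; infeasible at real problem sizes.
best = min((np.array(b) for b in itertools.product([0, 1], repeat=4)), key=energy)
print("best assignment:", best, "energy:", energy(best))
```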

While we still have work to do to scale quantum systems beyond their current 1,000-qubit range, it’s now very clear that as quantum systems scale, the mathematics they perform is ideally suited to take over some of the core processing of advanced AI systems. It will be several more years before we see real impact, but taking a long view, it’s clear to me that the computing foundation of modern AI will be a hybrid quantum system.

AI work will be spread across a set of diverse compute architectures, and one of those architectures will be quantum processing units. When that happens, we are likely to see an increase of many orders of magnitude in the capability of AI systems. If you were surprised by the massive leap forward when GenAI emerged late last year, expect a possibly bigger jump in AI when quantum and AI intersect in the not-too-distant future.

In conclusion, we know AI is happening and evolving quickly. We also know it is not independent of other emerging technologies. AI is the center of this universe; edge is how you will put it into production; Zero Trust is likely how you will secure it; and quantum will ultimately be what powers it over the long term, delivering the performance and efficiency needed to scale it into a truly global system. So I encourage you to think actively about AI, but not independently of these other architectures – that is how you’ll make sure your visions and actions align for long-term success.
