By Prasant Agarwal
The evolving compute landscape: preparing for the AI revolution
It took almost 20 years from IBM's Deep Blue defeating Garry Kasparov for AI-powered AlphaGo to beat Go world champion Ke Jie. The pace of AI remained slow for those two decades. However, the last few years have seen the technology cover decades' worth of advancement and adoption. AI involves heavy computation, and it requires large amounts of specialised compute power.
The pursuit of satisfying this compute hunger looms large over tech giants. Still, computing was principally governed by the more predictable Moore's law, which helped create faster, smaller, and cheaper chips for over six decades. As we approach the end of Moore's law, compute capability is still a few thousand times slower than biological benchmarks. Additionally, chipmakers predict that cost per yielded square millimetre will double when they move from 14/16 nm to 7 nm. As a result, there is a sudden spurt of activity in the otherwise slow world of semiconductors.
For many decades, the players mainly remained predictable. But in the last few years, we have seen big-ticket mergers. Intel purchased Altera, and AMD bought Xilinx, in pursuit of creating superior computing. Nvidia sought to consolidate its lead in AI processing by acquiring ARM. On the sidelines, RISC-V is sensing a significant opportunity to cover the perceived risk faced by traditional ARM users.
Specialised compute power at lower cost
With AI becoming mainstream, industry leaders feel the need to disaggregate the monolithic architecture of the processor. AMD took the initial steps and introduced the concept of chiplets in its flagship third-generation Ryzen series. It split the I/O and DRAM controller into one functional block and placed the CPU cores and L3 cache in individual chiplets. Chiplets create an additional lever whereby the area of specific blocks can be reduced, instead of pressuring chipmakers to spread their efforts on shrinking the complete monolithic chip. Chiplets also open the possibility of mixing different semiconductor materials and process nodes to reduce overall cost.
Era of specialised chips
The consolidation is just the tip of the iceberg. The real game is the emergence of the next big semiconductor player that can capture the emerging AI market, which requires significant compute in bursts rather than a consistently high compute load. This market is special in that it demands unprecedented power efficiency along with always-on sensing. Companies traditionally seen as software or hardware integrators, such as Facebook, Amazon, and Google, are gearing up to exploit this gap left by traditional chipmakers by investing in chips aimed at solving AI inference problems.
Compute is a fast-moving resource for AI
Just as FMCG goods are the bread and butter of everyday needs, compute is the same for AI inference. There can never be enough compute power to satisfy the precision challenges in AI. Some software players are trying to bridge the gap in compute demand with an innovative model whereby they lease smartphone compute power during the night and run workloads on it. Amazon is pitching its Inferentia chip as a custom offering combining chip, software, and server. It claims to provide low latency for real-time compute and to support almost any ML framework available in the market. Latency and framework support are two key challenges from the product-creation and ML-research perspectives.
Google is going a step further and using AI to design the best chips for the ML world as part of its research project Apollo. One of the pain points is place and route, in which chip designers try to determine the best layout for a chip based on its operations. Using ML, they can perform architecture exploration to find better opportunities for performance improvement. As per benchmarking, their process can complete a design in less than six hours, versus the weeks experts usually take using layout software. IBM is using AI to find the next breakthrough material by running countless simulations, mixing various materials and studying their physical and chemical properties with the Watson engine.
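To give a feel for what "finding the best layout" means, here is a minimal sketch of placement as an optimization problem. This is not Google's method (which uses learned policies on real netlists); it uses simulated annealing, a classic baseline, on a made-up netlist of five blocks connected by hypothetical wires, minimising total Manhattan wirelength on a small grid. All names and numbers are illustrative assumptions.

```python
# Toy placement sketch: simulated annealing on a hypothetical netlist.
# NOT Google's RL approach; a classic baseline for the same problem.
import math
import random

random.seed(0)

NETS = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3), (2, 4), (4, 0)]
NUM_BLOCKS = 5
GRID = 4  # blocks are placed on a GRID x GRID canvas

def wirelength(placement):
    """Total Manhattan distance across all nets (the cost to minimise)."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def anneal(steps=20000, t0=5.0, cooling=0.9995):
    # Start from a random placement of blocks at distinct grid cells.
    placement = random.sample([(x, y) for x in range(GRID)
                               for y in range(GRID)], NUM_BLOCKS)
    cost, t = wirelength(placement), t0
    for _ in range(steps):
        # Propose: move one random block to a random free cell.
        i = random.randrange(NUM_BLOCKS)
        free = [(x, y) for x in range(GRID) for y in range(GRID)
                if (x, y) not in placement]
        old = placement[i]
        placement[i] = random.choice(free)
        new_cost = wirelength(placement)
        # Accept improvements always; worse moves with Boltzmann probability.
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            placement[i] = old  # reject: undo the move
        t *= cooling  # lower the temperature, becoming greedier over time
    return placement, cost

final_placement, final_cost = anneal()
print(final_cost)
```

Real place-and-route works on millions of cells with routing congestion, timing, and power constraints, which is why expert iteration takes weeks and why a learned search policy that explores the layout space faster is so attractive.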
The future is exciting for computing innovators, with a lot of innovation in design, software solutions, usage models, and AI adoption about to happen. Whether the next decade sees the emergence of a new semiconductor power or the consolidation of existing powers into big power centres is yet to unfold.
About the author
Prasant is a marketing leader preparing businesses with long-term strategic technology choices to harness the power of Digital and AI to achieve their business outcomes in the post-pandemic world. He has worked with leading global semiconductor and software services companies across various strategy, marketing, and product management roles. He champions initiatives to widen technology enablement by simplifying technology choices for business buyers, and is looking to forge the right technology partnerships to create a win-win for enterprises. Prasant is an alum of IIM Ahmedabad and BITS Pilani, and has previously worked at STMicroelectronics, Samsung, and Solarflare Communications (now part of Xilinx).