(By Gary Mitchell)
Hyperscale, colocation, and enterprise data centers are united in their pursuit of connecting the unconnected, storing ever more data, and delivering higher bandwidth and faster transmission speeds, yet the challenges hyperscale networks face are entirely unique.
Arguably the biggest challenge for hyperscalers is continuity and, by association, reliability. According to Business Insider, a single minute of downtime costs an enterprise data center almost $9,000. A survey by the Uptime Institute found that over 10% of respondents said their most recent reportable outage cost them more than $1m in direct and indirect costs. On March 13th, 2019, Facebook suffered its worst-ever outage, affecting an estimated 2.7 billion users across its core social network, Instagram, and its messaging applications, Facebook Messenger and WhatsApp. Extrapolating from the company's 2018 revenue figures, CCN estimated that the blackout could have cost Facebook up to $90 million in lost revenue, based on an income of $106,700 per minute. With so many businesses relying on hyperscale data centers to provide the IT backbone of their operations, any downtime can have a substantial impact and sometimes catastrophic ramifications.
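A quick back-of-the-envelope check shows what the CCN figures quoted above imply about the outage's length (the per-minute income and total loss are from the article; the implied duration is simple arithmetic, not a figure CCN published):

```python
# Back-of-the-envelope check of the CCN outage estimate quoted above.
revenue_per_minute = 106_700   # USD, extrapolated from Facebook's 2018 revenue
estimated_loss = 90_000_000    # USD, CCN's upper-bound estimate

implied_minutes = estimated_loss / revenue_per_minute
implied_hours = implied_minutes / 60
print(f"Implied outage duration: {implied_minutes:.0f} minutes (~{implied_hours:.1f} hours)")
# Implied outage duration: 843 minutes (~14.1 hours)
```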
So how do hyperscalers ensure the uptime of millions of servers? Resiliency and redundancy provide a safety net: a backup plan that prevents a disruption in service in the event of an outage. Power is backed up with two or more deliveries into the facility, and distributed redundancy is also key, with data replicated across two or more zones. Each zone is isolated from the others, so no single disaster can affect all copies of the data.
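The zone-replication idea above can be sketched in a few lines. This is an illustrative toy only, not any provider's actual implementation; the zone names and in-memory stores are hypothetical stand-ins for real infrastructure:

```python
# Illustrative sketch of distributed redundancy: data written to two or
# more isolated zones so a single zone-wide failure loses nothing.
class Zone:
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.available = True

    def write(self, key, value):
        if self.available:
            self.store[key] = value

def replicated_write(zones, key, value):
    """Write to every zone so no single disaster loses the data."""
    for zone in zones:
        zone.write(key, value)

def read_with_failover(zones, key):
    """Read from the first available zone that holds the data."""
    for zone in zones:
        if zone.available and key in zone.store:
            return zone.store[key]
    raise KeyError(key)

zones = [Zone("zone-a"), Zone("zone-b"), Zone("zone-c")]
replicated_write(zones, "user:42", {"name": "example"})
zones[0].available = False                    # simulate a zone-wide outage
print(read_with_failover(zones, "user:42"))   # data survives the failure
```

Real systems add consistency protocols, health checks, and automated failover on top of this basic pattern, but the principle is the same: no single zone is a single point of failure.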
Another challenge facing hyperscale data centers is security and, by extension, customer confidence. Data breaches and other cyberattacks are a growing threat for businesses, making security one of the main considerations when selecting a data center provider. Physical security and compliance, from the geographical location of a data center to systems such as biometric identification, provide the first line of defense against potentially costly threats. In the data hall, virtual security measures such as strong data encryption, log auditing, and clearance-based access controls protect against both internal and external attacks. In hyperscale data centers, all activity is monitored, and any anomalies or attempts to compromise communications are reported. Servers are virtualized, and workloads are managed without reference to specific hardware, mapped to it only at the last moment. Globally, the impact of a data breach on an organization averages $3.86 million, with the US holding the largest average cost at $7.9 million.
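Two of the measures mentioned above, clearance-dependent access and audit logging, can be combined in a minimal sketch. The roles and clearance levels here are hypothetical examples, not any real provider's scheme:

```python
# Illustrative sketch only: access gated by clearance level, with every
# attempt logged for later auditing. Roles and levels are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

CLEARANCE = {"operator": 1, "engineer": 2, "security-admin": 3}

def can_access(role, required_level):
    """Allow access only at or above the required clearance; log every attempt."""
    level = CLEARANCE.get(role, 0)
    allowed = level >= required_level
    audit_log.info("access attempt: role=%s required=%d allowed=%s",
                   role, required_level, allowed)
    return allowed

print(can_access("operator", 3))         # False: insufficient clearance
print(can_access("security-admin", 3))   # True
```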
Customer confidence is of huge importance to hyperscalers as they aim to convince customers that their confidential data, and the data of their customers, is in safe hands. Data breaches have more than just a financial impact: a security attack erodes consumer trust, damaging the hyperscale provider's image, integrity, and reliability.
Long-term growth often requires physical network expansion, which can be achieved by building up or building out. Building out means acquiring or leasing land for future builds. In densely populated areas where land is extremely expensive or non-existent, the only option for ensuring low latency is to build up, adding floors to a new or pre-existing building. Hyperscalers need suppliers with a global presence, as they need to be serviced everywhere, consistently.
Decisions may be made regionally or even globally, but installation takes place worldwide, meaning expert local support is needed in each area. Hyperscalers also require suppliers who can build specific variants for them, often adapted to meet local requirements or regulations. Individual country requirements, such as those concerning the Construction Products Regulation (CPR), can create roadblocks for hyperscale operators, so suppliers need to be versatile and equipped to ease these pain points.
Constant technological advancement is also something hyperscalers are subject to and affected by. Operating out-of-date technology consumes more space, power, and time, so there is always a rolling replenishment. Moore's Law, as commonly cited, holds that the number of transistors on a chip doubles roughly every 18 months to two years, with bandwidth and processing capacity growing accordingly. While most observers believe Moore's Law in its original form is coming up against physical limits, innovation in chip design and software methods continues to drive a dramatic evolution of computing. This requires hyperscalers to renew their technology every 3 to 4 years, meaning that entering the hyperscale market demands sizeable capital expenditure and continual investment in new technology. The entrance fee to compete with the big players is astronomical, and the pace of change too much for many. No rest for the weary; the race continues.
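Taking the 18-month doubling period quoted above at face value, a rough calculation shows why a 3-to-4-year refresh cycle follows: by the end of the cycle, new hardware packs several times the capacity into the same footprint.

```python
# Rough illustration of the refresh-cycle arithmetic, taking the
# quoted 18-month doubling period at face value.
DOUBLING_PERIOD_MONTHS = 18

def capacity_multiplier(months):
    """How much denser hardware becomes after `months` of steady doubling."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

for years in (3, 4):
    m = capacity_multiplier(years * 12)
    print(f"After {years} years: ~{m:.1f}x the capacity per chip")
# After 3 years: ~4.0x the capacity per chip
# After 4 years: ~6.3x the capacity per chip
```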
In the last 20 years, global IP traffic has grown from 12 petabytes per month to a mammoth 156,000 petabytes (156 exabytes), and it shows no signs of stopping, with a predicted 396,000 petabytes per month by 2022, an increase of over 150%.
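The growth percentage quoted above checks out against the raw figures:

```python
# Checking the IP-traffic growth figures quoted above (petabytes per month).
past, current, predicted = 12, 156_000, 396_000

growth_to_2022 = (predicted - current) / current * 100
print(f"Projected increase to 2022: {growth_to_2022:.0f}%")  # ~154%, i.e. over 150%
```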
There are now 63,000 searches made per second on Google with the average person conducting 3-4 searches every single day.
This translates into 3.8 million searches per minute, 228 million searches per hour, 5.6 billion searches per day, and at least 2 trillion searches per year. In 2013, Google went dark for a few minutes, causing global Internet traffic to drop by an astounding 40%.
Source: Cisco Global Cloud Index: Forecast and Methodology, 2016-2021 White Paper
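Deriving the larger figures directly from the 63,000 searches-per-second rate quoted above gives slightly different numbers, since the article's figures are rounded at each step:

```python
# Deriving per-minute/hour/day/year search volumes from the quoted
# 63,000 searches-per-second rate (the article rounds at each step).
per_second = 63_000
per_minute = per_second * 60
per_hour = per_minute * 60
per_day = per_hour * 24
per_year = per_day * 365

print(f"{per_minute:,} per minute")  # 3,780,000
print(f"{per_hour:,} per hour")      # 226,800,000
print(f"{per_day:,} per day")        # 5,443,200,000
print(f"{per_year:,} per year")      # 1,986,768,000,000 (~2 trillion)
```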
The rapid growth of hyperscale data centers is dependent on the strength of their supply chain. From intermittent demand to the need for rapid technological innovation, many suppliers have serious difficulty addressing the needs of hyperscalers, meaning suppliers that want to be involved must rethink their approach to manufacturing, sales, and research and development.
In relation to sales, suppliers must be able to support margin reduction: hyperscale is a game of economics in which large-scale manufacturing goes hand-in-hand with cost reduction. This has often led to channel disintermediation, where the hyperscaler, as an end user, needs to see tangible value-added services from the distribution layer.
Hyperscalers require rapid innovation and often act in advance of industry standards, meaning suppliers need to act in accordance with best practice and thought leadership, while also working within an efficient framework that means innovation remains economically viable.
For suppliers to consistently support the growth of hyperscale operators from a manufacturing perspective, they must be equipped to accommodate inaccurate demand forecasts, with demand that can surge or disappear at a moment's notice. This, in turn, affects manufacturing, as suppliers need to fulfill orders against deadlines that are often tight and fixed.
Hyperscalers won’t often change deadlines, but they will change suppliers. Expertise and experience have never been as important as they are right now. Hyperscale data centers expect to receive what they need when they need it. They require consistent products, reliable performance, and a partner that understands their business and challenges, helping them deliver their services and create a more connected world.
(The author is the Marketing Head at AFL Hyperscale)
If you have an interesting article / experience / case study to share, please get in touch with us at [email protected]