Five ways data centers can become more agile and reliable

Vijay Kumar Mahalingam, Vice President, Technical Services, Rahi Systems

The present pandemic has completely changed the way organisations think about business continuity. Enterprises are rethinking and reworking their business continuity strategy to include a ‘people perspective’, a shift away from thinking only about infrastructure.

Today, end-users are not the only ones working from home. Many IT professionals are also doing their jobs remotely, which makes it imperative to find ways to manage and maintain data center infrastructure without going on site. What’s more, remote work puts increased pressure on the IT environment in unpredictable ways. IT teams that once planned around specific spikes in demand, such as during the holidays or at month-end, must now ensure there are sufficient virtual machine resources and network bandwidth to support mission-critical workloads.

At a time when dependency on IT infrastructure is extremely high, highly reliable, high-performance infrastructure is essential. IT teams need to ensure that work-from-home users have ready access to the applications and data they need. Downtime prevention and business continuity are more important than ever.

Optimising the Data Center

To meet these challenges, organisations need a cloud-like data center infrastructure that is scalable and operationally efficient. The modernisation efforts should focus on these five critical areas:

#1 Redundancy: It is inevitable that some data center infrastructure components will fail. For example, according to the Ponemon Institute, UPS failure is the leading cause of unplanned outages. Your data center should therefore have redundant components, power and connectivity serving all equipment, with automated failover and remote management. It is also important to build in enough redundancy that no single point of failure, be it a server or a network switch, can cause downtime. Enterprises must invest in the right monitoring tools to ensure that any issue is proactively resolved before it can impact the performance of the IT infrastructure.
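The failover behaviour described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the endpoint names and the health probe are hypothetical assumptions.

```python
# Minimal sketch of health-check-driven failover between redundant endpoints.
# The endpoint names ("feed-a", "feed-b") and the is_healthy() probe are
# illustrative assumptions, not part of any specific monitoring product.

def select_active(endpoints, is_healthy):
    """Return the first healthy endpoint, mimicking automated failover."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    raise RuntimeError("no healthy endpoint: redundancy exhausted")

# Example: the primary power feed is down, so failover picks the secondary.
status = {"feed-a": False, "feed-b": True}
active = select_active(["feed-a", "feed-b"], lambda ep: status[ep])
print(active)  # feed-b
```

In a real deployment the probe would be an actual health check (ping, SNMP poll, sensor reading) and the selection would be driven by the monitoring tooling rather than a manual call.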

#2 Cooling: Today’s high-performance equipment and dense data center designs require much more cooling than traditional architectures. In many cases, IT leaders need to take a fresh look at their cooling infrastructure to ensure that it can handle the load. This is critical, as cooling is estimated to account for close to 40 per cent of the total energy consumed by a data center. In-row cooling and hot- and cold-aisle containment can help create the right operating environment. Organisations should also consider using AIOps (AI in IT operations) to better measure and optimise energy efficiency.
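To see what the 40 per cent figure implies for efficiency, consider the standard PUE (Power Usage Effectiveness) metric, the ratio of total facility power to IT equipment power. The wattage figures below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative PUE arithmetic using the article's rough figure that cooling
# can approach 40 per cent of a data center's total energy consumption.
# All kW figures are hypothetical assumptions.

def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

it_load = 50      # kW consumed by servers, storage and network gear
cooling = 40      # kW for cooling (40% of the 100 kW total in this example)
other = 10        # kW for lighting, UPS losses, etc.

print(round(pue(it_load + cooling + other, it_load), 2))  # 2.0
```

A PUE of 2.0 means that for every watt delivered to IT equipment, another watt is spent on overhead; containment and in-row cooling aim to push that ratio closer to 1.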

#3 Power: It is equally important to ensure adequate power and optimise power loads, which is critical from both a cost and a resilience perspective. Overloading power supplies invites failure, while putting a low-density rack on a larger power supply wastes energy. A thorough review of power systems can help identify these kinds of issues. It is also important to decommission and replace old, energy-inefficient IT infrastructure. For example, Gartner says that 60 per cent of payload power is consumed by servers. Enterprises can reduce this significantly by identifying and eliminating workloads that are not necessary, consolidating virtual machines, and replacing old, inefficient servers with new, energy-efficient ones.
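The savings from consolidation are easy to estimate with back-of-the-envelope arithmetic. All wattage and server counts below are hypothetical assumptions, not measurements.

```python
# Back-of-the-envelope sketch of power savings from server consolidation.
# All figures are hypothetical assumptions for illustration only.

old_servers, old_watts = 10, 400   # ten older servers at ~400 W each
new_servers, new_watts = 4, 350    # same workloads consolidated onto four newer boxes

before = old_servers * old_watts   # 4000 W
after = new_servers * new_watts    # 1400 W
saving_pct = 100 * (before - after) / before

print(f"{saving_pct:.0f}% reduction in server power draw")  # 65% reduction
```

Even modest consolidation ratios compound, because every watt saved at the server also reduces the cooling load needed to remove it as heat.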

#4 Space: Traditionally, data centers might be expected to have a 15-year lifecycle, but the rapid operational changes required by the Covid-19 pandemic have made it far more difficult to predict demand. Modular data center infrastructure provides the agility and scalability needed to meet changing requirements. Best-in-class solutions offer plug-and-play simplicity for rapid buildout. Enterprises can choose from a variety of deployment models: on-premises (applications are hosted in a data center within the organisation’s premises), co-location (organisations rent space, power and cooling for their servers or other hardware in data centers operated by service providers) and cloud (enterprises host their applications in a public cloud using a pay-per-use model).

#5 Leveraging automation using AIOps: Today, more than ever, IT operations teams are being asked to manage complex IT infrastructure. In a multi-cloud world, where there is increasing pressure to do more with less, there is huge demand to improve efficiency without a proportionate increase in costs. In this environment, AIOps has emerged as a powerful way to support today’s highly dynamic and distributed IT infrastructure. AIOps can help IT operations teams monitor the huge number of logs and alerts and correlate insights to the actual performance issues that ultimately impact user experience. AIOps is being adopted rapidly, and organisations are using intelligent insights from AIOps tools to find the root cause of issues. The rising importance of AIOps is corroborated by research firm Gartner, which predicts that large enterprises’ exclusive use of AIOps and digital experience monitoring tools to monitor applications and infrastructure will rise from 5 per cent in 2018 to 30 per cent in 2023.
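One core AIOps idea mentioned above, correlating a flood of alerts into a handful of actionable incidents, can be sketched very simply: group events that occur close together in time, since a single root cause (say, a failing switch) tends to trigger a cascade of alerts within seconds. The alert format and the 60-second window are illustrative assumptions; real AIOps tools use far richer signals (topology, text similarity, learned baselines).

```python
# Minimal sketch of alert correlation by time proximity, one simple idea
# behind AIOps event grouping. The (timestamp, source) alert format and
# the 60-second window are illustrative assumptions.

def correlate(alerts, window=60):
    """Group (timestamp_seconds, source) alerts into incidents: alerts
    separated by more than `window` seconds start a new incident."""
    incidents, current = [], []
    for ts, src in sorted(alerts):
        if current and ts - current[-1][0] > window:
            incidents.append(current)
            current = []
        current.append((ts, src))
    if current:
        incidents.append(current)
    return incidents

# A switch failure cascades through a VM and a database within seconds;
# an unrelated UPS alert arrives five minutes later.
alerts = [(0, "switch-1"), (5, "vm-12"), (8, "app-db"), (300, "ups-2")]
print(len(correlate(alerts)))  # 2 incidents: one cascade, one isolated alert
```

Instead of paging an operator four times, a correlation step like this surfaces two incidents, and the earliest alert in each group is a natural root-cause candidate.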

In conclusion, given the criticality of data centers and the challenges of supporting them in a difficult environment, it is extremely important for enterprises to continually focus on initiatives and processes that promise sustained improvement in efficiency and performance. By focusing on the five parameters of data center optimisation described above, enterprises can create data centers that are more resilient to rapid change.
