Rack densities are on the rise, and the trend will continue in the coming years. In an Uptime Institute survey, 69% of respondents said their average rack density had increased. Space optimization, lower operating costs, and demand for Artificial Intelligence (AI) are all driving this trend. And as density increases, data center management becomes ever more complicated. Operators need to align IT equipment with the infrastructure's capacity for power, cooling, and space. Losing capacity in any one of these can impair the entire business's operation.
Integrated Data Center Management
Data centers deliver critical services to millions of people around the world, so they need to be as efficient and resilient as possible. The key to achieving this is bridging the gap between IT and facility management systems. Integrated data center infrastructure management (DCIM) is a strategy that unifies these different systems:
Building Management System (BMS) – Access control, video surveillance, fire alarms, HVAC control, and programmable lighting. Electric power management is also a component of this system.
IT System Management – Hardware, software, database, and network
Data Center Infrastructure Management (DCIM) – Supports the center's hardware and software, such as power subsystems and uninterruptible power supplies (UPS). It also covers cooling systems and connections to external networks.
The goal of DCIM is to improve visibility into all the elements within the data center operating space. With integrated systems, operators can manage the facility using intuitive analytics and reports. It also makes the power and cooling impact of the equipment transparent.
Lights Out Data Center
Photo Credit: www.forbes.com
A lights out data center is a facility whose automated systems operate without on-site staff. With growing density, data centers need to be optimized for maximum efficiency, and at that point the facility no longer needs to accommodate humans. Lights out data centers allow operators to place workloads without concern for factors such as space, temperature, and human comfort.
The COVID-19 pandemic also boosted the use of automated maintenance and monitoring systems in data centers, making the need for such technology clear to data center managers. Due to safety protocols, on-site staff could not always travel to the facility and were forced to rely on remote access to check the data center and address incoming issues. Manual management has become more taxing under the new normal and with more dispersed IT teams.
Advantages of a Lights Out Data Center
A lights out facility can have the following advantages for data center management compared with traditional sites:
Creativity in the Design – The data halls can be designed freely without considering human occupants.
More Efficient Cooling – The facilities can be operated with higher temperatures and humidity without causing a business interruption.
Minimize Human Errors – A single error such as mislogging data can cause problems. According to a study by Ponemon Institute, human error is the second-highest cause of data center downtime.
Improved Visibility – It provides visibility into the racks to confirm that servers are aligned and grouped together correctly.
However, a data center is still a large facility of critical assets and services, and it must still be managed thoroughly. Although a lights out facility shows advantages over a conventional one, it can also magnify the impact of failure: a problem with one rack can become a problem for the whole workload. The Uptime Institute therefore recommends deploying on-site staff ready to address potential problems. For Tier III and IV facilities, there should be at least two qualified operators on site around the clock.
Challenges for High-Density Data Center Management
Photo Credit: www.datacenterknowledge.com
Higher-density IT loads mean more kW per square foot in the white space, producing higher discharge temperatures. This translates into more challenges for data center managers. Before fully transitioning to a high-density data center, managers must be aware of these challenges and plan how to address them.
Risk of Downtime
There are several causes of data center downtime, and among the most common are hotspots. A high-density data center consumes a huge amount of power and therefore exhausts a lot of heat; when cooling capacity isn't sufficient, hotspots can develop. Another potential cause of downtime is water leaks. Liquid cooling has become mainstream for high-density racks, but failure to properly integrate and monitor the system can cause a leak.
High-density IT racks stress the power capacity of modern data centers. Power must be provided at the right time and in the right amount to support IT equipment and processes. If the equipment draws too much power and exceeds the circuit limit, a breaker may trip.
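As a rough illustration of the breaker-trip risk, the sketch below checks a rack's power draw against its branch circuit capacity. The 80% continuous-load derating factor and the 208 V circuit are illustrative assumptions based on common North American electrical practice; actual limits vary by site.

```python
# Sketch: headroom between a rack's power draw and its derated circuit limit.
# The 80% derating and 208 V supply are assumptions for illustration only.

def breaker_headroom(draw_watts: float, breaker_amps: float,
                     volts: float = 208.0, derate: float = 0.8) -> float:
    """Return remaining watts before the derated circuit limit is reached."""
    usable_watts = breaker_amps * volts * derate  # continuous-load limit
    return usable_watts - draw_watts

# A 30 A, 208 V circuit derated to 80% supports about 4,992 W.
print(breaker_headroom(draw_watts=4200, breaker_amps=30))  # positive = within limit
```

Tracking this headroom per circuit lets operators flag racks approaching their limit before a trip occurs.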
Building an energy-efficient data center can be challenging with high-density racks, but operators need to pursue it to realize long-term savings on electricity costs. It can also lead the way to a more sustainable facility, something governments increasingly mandate.
Deploying New IT Equipment
Data center deployment is tricky and complex, often involving multiple hardware platforms and technologies. There should be a well-planned deployment strategy for integrating conventional servers, networking equipment, and storage resources. After planning, the deployment also needs to be executed flawlessly and closely maintained.
A high-density data center contains numerous interconnected servers and networks, which makes them more challenging to track and monitor. Yet without monitoring and records, operators cannot plan for or respond to potential outages.
The most common challenge for high-density data center management is the cooling requirement of the servers. Large processors used in high-performance computing demand specialized cooling strategies. No two data centers are alike and there is no one-size-fits-all cooling solution. Therefore, thermal control must be customized based on the needs of the business applications.
Chilled Water Loop System
Photo Credit: hvactrainingshop.com
For large data centers, a chilled water loop system can be a viable solution. It uses chilled water to cool the air distributed by the Computer Room Air Handlers (CRAHs). The chilled water is supplied by a chiller plant located in or around the building, typically circulating between about 6 and 12 °C (or 5 and 11 °C), a supply-to-return temperature difference of about six degrees; a larger difference than this is less thermally efficient. Chilled water systems can provide more consistent efficiency because they are relatively independent of fluctuations in ambient temperature. Pumps and pipes move the chilled water around the facility. To simplify, the chiller serves as the heart of the system and the pipes as its arteries, and the system won't function without either component.
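The heat a chilled water loop can absorb follows from the basic relation Q = ṁ · c_p · ΔT. The sketch below applies it with the roughly six-degree supply-to-return difference described above; the flow rate is an illustrative assumption.

```python
# Sketch: cooling capacity of a chilled water loop, Q = m_dot * c_p * delta_T.
# Flow rate and temperatures are illustrative, not from any specific plant.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def cooling_capacity_kw(flow_lps: float, supply_c: float, return_c: float) -> float:
    """Cooling capacity in kW for a given flow (litres/second) and loop temps."""
    mass_flow = flow_lps * 1.0        # roughly 1 kg per litre of water
    delta_t = return_c - supply_c     # e.g. 12 C return - 6 C supply = 6 K
    return mass_flow * CP_WATER * delta_t / 1000.0

# 10 L/s with a 6-degree delta-T absorbs about 251 kW of heat.
print(cooling_capacity_kw(flow_lps=10, supply_c=6, return_c=12))
```

The same relation shows why a smaller ΔT requires a higher flow rate, and thus more pumping energy, to remove the same IT heat load.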
Controlling Chilled Water Loop System
Consistent monitoring and control of the chilled water loop system helps operators avoid expensive repairs and maintenance. It also ensures the system provides adequate cooling to maintain the optimum environment in the space. With monitoring tools such as sensors, the chiller can cool the data center more effectively. For instance, rack temperature monitoring can trigger adjustments to the CRAH fan speed to match the IT load.
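The rack-temperature-to-fan-speed adjustment mentioned above can be sketched as a simple proportional controller. The setpoint, gain, and speed limits here are illustrative assumptions; a real deployment would tune these against the facility's thermal envelope.

```python
# Sketch: proportional control mapping rack intake temperature to CRAH fan speed.
# Setpoint (24 C), gain, and speed limits are illustrative assumptions.

def crah_fan_speed(intake_temp_c: float, setpoint_c: float = 24.0,
                   gain: float = 10.0, min_pct: float = 30.0,
                   max_pct: float = 100.0) -> float:
    """Return fan speed (%), ramping up as intake temperature exceeds the setpoint."""
    speed = min_pct + gain * (intake_temp_c - setpoint_c)
    return max(min_pct, min(max_pct, speed))  # clamp to the fan's operating range

print(crah_fan_speed(27.0))  # 3 degrees over setpoint -> 60.0 %
```

In practice this loop runs continuously, so fan energy tracks the IT load instead of running at a fixed worst-case speed.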
Wireless Rack Temperature Monitoring
Data center monitoring with thermal map sensors helps identify and eliminate hotspots in your cabinets by identifying areas where the temperature differential between front and rear is too high. Thermal maps consist of a string of 6 temperature sensors and 2 optional humidity sensors. Pre-wired for easy installation in your cabinet, they are placed at the top, middle, and bottom of both the front and rear of the cabinet. This configuration of sensors monitors the air intake and exhaust temperatures of your cabinet, as well as the temperature differential from front to rear.
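The six-sensor arrangement above can be evaluated with a small check like the following. The 20 °C front-to-rear differential threshold is an illustrative assumption; the appropriate limit depends on the equipment and airflow design.

```python
# Sketch: flag hotspots from a thermal-map string of six sensors,
# front top/middle/bottom vs. rear top/middle/bottom.
# The 20 C differential threshold is an illustrative assumption.

def rack_differentials(front: list[float], rear: list[float],
                       limit: float = 20.0) -> list[bool]:
    """Return True for each position where the front-to-rear delta exceeds the limit."""
    return [(r - f) > limit for f, r in zip(front, rear)]

front = [22.0, 23.5, 21.0]  # intake temps: top, middle, bottom
rear = [38.0, 45.0, 35.0]   # exhaust temps: top, middle, bottom
print(rack_differentials(front, rear))  # [False, True, False] -> middle runs hot
```

Persisting these readings over time also reveals slow trends, such as a fan gradually losing airflow.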
As fans age or fail, the airflow over the IT equipment lessens. This leads to higher temperature differentials between the front and rear.
Insufficient pressure differential to pull air through the cabinet
When there is insufficient pressure differential between the front and rear of the cabinet, airflow is reduced. The less cold air flowing through the cabinet, the higher the front-to-rear temperature differential becomes.
Power Usage Effectiveness (PUE)
When this data is combined with power consumption readings from the in-line power meter, you can safely make adjustments to the data center cooling systems without compromising your equipment, while instantly seeing the changes in your PUE numbers.
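PUE itself is a simple ratio: total facility power divided by the power delivered to IT equipment. The figures below are illustrative.

```python
# Sketch: Power Usage Effectiveness, PUE = total facility power / IT power.
# A value closer to 1.0 means less overhead spent on cooling,
# power distribution, and lighting. Numbers are illustrative.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return the PUE ratio for the given power readings."""
    return total_facility_kw / it_load_kw

print(pue(total_facility_kw=1500, it_load_kw=1000))  # 1.5
```

Watching this number while tuning cooling setpoints shows immediately whether a change reduced overhead or merely shifted load.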
Air Handling Unit Monitoring
In chilled water cooling systems, having sufficient water in the cooling tower is essential. With wireless tank depth pressure sensors, you can easily monitor the water level and be alerted if it drops below required levels.
Flow meters can be installed to check for water loss, ensuring inflow and outflow are equal.
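The inflow/outflow comparison can be automated with a check like the one below. The 2% tolerance, which allows for normal meter error, is an illustrative assumption.

```python
# Sketch: detect possible water loss by comparing inflow and outflow meter
# readings. The 2% tolerance (for normal meter error) is an assumption.

def leak_suspected(inflow_lpm: float, outflow_lpm: float,
                   tolerance: float = 0.02) -> bool:
    """True if outflow lags inflow by more than the tolerance fraction."""
    return (inflow_lpm - outflow_lpm) > inflow_lpm * tolerance

print(leak_suspected(100.0, 97.0))  # 3 L/min loss exceeds 2% -> True
print(leak_suspected(100.0, 99.0))  # within tolerance -> False
```

Raising an alert on this condition catches slow leaks long before they show up as a low tank level.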
A differential air pressure sensor can be installed on the air handling unit filter. When the pressure drop across the filter is high, the filters are dirty and require maintenance. The sensors are wireless, with a 10-year battery life.
Case Study: AKCP Monitors The Lubbock 911 Call Centers
AKCP, the world's oldest and largest supplier of networked wired and wireless sensor solutions, has supplied a monitoring system for the Lubbock, Texas, 9-1-1 call centers.
The Lubbock Emergency Communication District is responsible for providing 24/7 call availability to all 9-1-1 call centers in the county. The data center, located at the Lubbock Emergency Communication District, receives all 9-1-1 calls, which are distributed to one of nine public safety answering points throughout the county. The data center also serves as the connection point for SMS-to-9-1-1 services.