In today’s digital era, data centers have become an indispensable part of the world, playing a significant role for individuals, businesses, cities, and countries alike. Reliance on online information and everyday use of the Internet of Things (IoT) drive the growth of high-density data centers. As the number of servers in a facility multiplies, energy consumption rises with it. Rack power densities that sat at 3 to 5 kW over the past 15 years have grown to as much as 100 kW and continue to climb, demanding ever more efficient cooling systems.
Data Center Cooling
Cooling units are one of the core components of a reliable data center. They counter the exhaust heat from the equipment, maintaining a favorable operating environment. Much of a facility’s management effort goes toward efficient cooling: according to research, almost 40% of operating expenses go to cooling. Recent market analysis also projects a 3% annual growth rate for the data center cooling market between 2020 and 2025.
Failure to provide adequate cooling can lead to hotspots, overheating, and the mixing of hot and cold air. These are primary causes of data center downtime, with consequences that interrupt business operations.
Using chillers and computer room air conditioning (CRAC) units was the simplest way to increase cooling efficiency. However, as data center IT loads increase every year, several other solutions and devices have been brought in to secure the future of data center cooling.
Hot and Cold Aisle Layout
One of the first methods used in data centers is IBM’s hot and cold aisle layout, introduced in 1992. This method can contribute cooling savings of up to 35%. Raised floors, cooling units, and proper arrangement of the racks are the prerequisites. It focuses on separating the hot and cold air into two different aisles.
The hot and cold aisle layout lines up server racks in alternating rows, with the cold air intakes of one row facing the hot exhausts of the next. This makes one aisle cold (supply) and the other hot (return). As air moves through the servers it is heated and exhausted into the hot aisle, then returns to the cooling units. The cooling units are placed around the room or at the ends of the rows to ensure proper air delivery: they push conditioned air into the subfloor, and the cool air rises through the floor tiles into the cold aisles.
This layout is best suited to expansions or newly built data centers. For existing facilities, retrofitting to the new layout can be costly, and it can cause downtime because equipment must be powered off before the racks are moved.
As rack power densities climbed toward 10 kW, another air cooling method was introduced. Modern data centers use containment systems, which separate the cool intake air from the hot exhaust air with a physical barrier. The barrier delivers significant energy savings, improving efficiency by 40 to 90 percent. This configuration, of course, requires the racks to be arranged in the hot and cold aisle layout.
Aisle containment can be implemented as hot aisle containment, cold aisle containment, or chimney containment. Each strategy has its own applications and trade-offs, but all improve efficiency and help secure the future of data center cooling.
- Hot Aisle Containment – isolates the hot exhaust air using doors at the ends of the racks or ducts running from the hot aisle to the cooling units. As the air rises, the barriers give the hot air a clear upward path to the cooling units, ensuring the warmest possible return air. Both raised-floor and slab environments work with this strategy. However, the contained aisle can become immensely hot, making it uncomfortable for workers servicing the racks.
- Cold Aisle Containment – contains the cold aisle, turning the rest of the room into one large hot air return plenum. Plastic curtains, doors at the ends of the racks, and ceiling partitions are used as barriers. While this is easier to implement, it has more drawbacks than hot aisle containment.
- Chimney Containment – directs the hot return air through a solid metal chimney to the overhead return plenum. Chimneys are installed on the back of a single rack or system and run up to the ceiling return plenum, taking advantage of the natural rise of hot air. Pros include flexible configuration and the elimination of hot air from all occupied spaces.
In-Rack Heat Extraction
This solution went mainstream when rack power densities surpassed 10 kW. Hot air generated by the servers is extracted inside the racks themselves, eliminating it before it circulates into the server room. Chillers and compressors are installed in the racks, forming a compact and efficient cooling system that keeps the room cool and favorable for heat-sensitive equipment. However, achieving very high computational density per rack becomes more challenging.
Power densities continued to grow, reaching 20 kW by 2018 and introducing liquid-cooled configurations to the industry. Air-cooled methods had been used for the longest time and made a significant impact on cooling efficiency, yet they come with high energy consumption and space limitations.
Liquid cooling uses liquid to remove heat directly, countering the drawbacks of air cooling. The method is more ecological, practical, and scalable, and it can be implemented as full immersion cooling or direct-to-chip cooling.
Liquid Immersion Cooling uses a dielectric coolant fluid to collect the heat exhausted by the servers. Although combining liquid and electricity is normally disastrous, this fluid is non-conductive and non-flammable. The hardware is immersed, and heat transfers into the fluid, turning it into vapor.
Direct-to-chip cooling rejects heat through piping. Fluid is pumped through cold plates attached to the electronic components; as heat is drawn out, it is carried to the cooling units and rejected to the outside atmosphere.
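The amount of heat a direct-to-chip loop can carry away follows the basic coolant heat-transport relation Q = ṁ · c_p · ΔT. A minimal sketch, with illustrative values that are assumptions rather than vendor figures:

```python
# Coolant heat-transport relation behind direct-to-chip cooling:
# Q = m_dot * c_p * (T_out - T_in). All numbers below are
# illustrative assumptions, not measured data.

def heat_removed_watts(flow_kg_per_s: float, t_in_c: float, t_out_c: float,
                       specific_heat_j_per_kg_k: float = 4186.0) -> float:
    """Heat carried away by the coolant loop (default c_p is water's)."""
    return flow_kg_per_s * specific_heat_j_per_kg_k * (t_out_c - t_in_c)

# Example: 0.05 kg/s of water warming from 30 C to 40 C at the cold plate
q = heat_removed_watts(0.05, 30.0, 40.0)
print(f"{q:.0f} W removed")  # 0.05 * 4186 * 10 = 2093 W
```

The same relation shows why liquid outperforms air here: water’s specific heat and density let a small flow move kilowatts that would require enormous volumes of air.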
Monitoring the Future
Liquid immersion cooling may still be new to the industry, with many gray areas left to tackle and study. However, to protect the investment required to make this method possible in a data center, monitoring is a must.
Wireless Temperature Monitoring
The temperatures of the server surface, the liquid bath, and the inlet and outlet coolant inside the coils can be recorded and measured by wireless temperature sensors or K-type thermocouples.
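The value of those readings comes from checking them against alert thresholds. A minimal sketch of that logic, where `read_sensor` would be a real sensor interface and the threshold values are illustrative assumptions:

```python
# Threshold alerting on immersion-cooling temperature readings.
# The measurement points mirror those named in the text; the
# (low, high) limits are made-up illustrative values.

THRESHOLDS_C = {
    "server_surface": (10.0, 60.0),
    "liquid_bath": (15.0, 45.0),
    "coil_inlet": (15.0, 35.0),
    "coil_outlet": (15.0, 50.0),
}

def check_readings(readings: dict) -> list:
    """Return an alert string for every out-of-range reading."""
    alerts = []
    for point, value in readings.items():
        low, high = THRESHOLDS_C[point]
        if not (low <= value <= high):
            alerts.append(f"{point}: {value:.1f} C outside {low}-{high} C")
    return alerts

# Example sample: the liquid bath is above its 45 C limit
sample = {"server_surface": 52.3, "liquid_bath": 47.8,
          "coil_inlet": 25.1, "coil_outlet": 38.9}
for alert in check_readings(sample):
    print("ALERT:", alert)
```

In practice the thresholds would come from the equipment’s allowable operating ranges, and the alerts would feed an email, SNMP trap, or dashboard rather than `print`.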
Wireless Pipe Pressure Monitoring
The pressure in the tank is monitored by an automatic pressure relief valve paired with a pressure sensor. A digital pressure gauge monitors all kinds of liquids and gases, supports remote monitoring via the internet with alerts and alarms when pressures fall outside pre-defined parameters, and can upgrade existing analog gauges.
Power Monitoring Sensor
The power of the cooling unit (including the pump and the fan) is monitored using a power meter that records real-time power consumption. The AKCP Power Monitor Sensor provides vital information and allows you to monitor power remotely, eliminating the need for manual power audits and providing immediate alerts to potential problems. Power meter readings can also be used with the sensorProbe+ and AKCPro Server for live PUE calculations that analyze how efficiently power is used in your data center. Data collected over time by the Power Monitor Sensor can also be viewed with the built-in graphing tool.
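The PUE calculation that those meter readings feed is simply total facility power divided by IT equipment power. A minimal sketch, with made-up kW figures standing in for live meter data:

```python
# Power Usage Effectiveness: PUE = total facility power / IT power.
# An ideal facility approaches 1.0. The kW values below are
# illustrative assumptions, not readings from any real deployment.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute PUE from two power-meter readings."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 500 kW total draw, of which 350 kW goes to IT equipment
print(f"PUE = {pue(500.0, 350.0):.2f}")  # 500 / 350, about 1.43
```

Tracking this ratio over time is what makes the efficiency of a cooling upgrade measurable: if cooling overhead drops while the IT load stays constant, the PUE falls toward 1.0.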
Predicting the future of data center cooling is beyond human capability on its own; by thoroughly analyzing the data provided by sensors, however, it becomes workable. Adaptable monitoring solutions such as those offered by AKCP are a wise investment for data centers, especially facilities housing the most critical assets. Remote capabilities make monitoring easier, and the feature set helps keep your data center in the best possible condition.
Learn more, message AKCP now at email@example.com!