Server room temperature is one of the most important metrics in any data center environment. Keeping the equipment inside at a consistent temperature and humidity is a critical part of a facility manager’s job. So what is the maximum server room temperature?
It is essential for data center managers to stay informed of the guidelines they should be following in keeping their facilities in tip-top shape. A disparity of 1°C or 2°C in either direction may not seem like much, but it can make a huge difference in the long run.
What is the maximum temperature a server room can comfortably operate at? What is the minimum? The answers to these questions will give IT managers an idea of the accepted range at which they can keep their facilities.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is the body that governs the standard for accepted air temperature and humidity ranges in data center environments. ASHRAE Technical Committee 9.9 determined in 2011 that a class A1 data center should maintain a temperature between 59°F and 89.6°F, and a relative humidity between 20% and 80% RH. ASHRAE recommends that facilities stay within these limits.
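As a minimal sketch, a monitoring script could check each sensor reading against the class A1 envelope quoted above. The thresholds below come from this article's figures, not directly from an ASHRAE publication:

```python
# Class A1 allowable envelope as cited in this article:
# 59-89.6 °F dry-bulb, 20-80% relative humidity.
A1_TEMP_F = (59.0, 89.6)   # allowable temperature range, °F
A1_RH_PCT = (20.0, 80.0)   # allowable relative humidity range, %

def within_a1_envelope(temp_f: float, rh_pct: float) -> bool:
    """Return True if a sensor reading sits inside the A1 envelope."""
    return (A1_TEMP_F[0] <= temp_f <= A1_TEMP_F[1]
            and A1_RH_PCT[0] <= rh_pct <= A1_RH_PCT[1])

print(within_a1_envelope(72.0, 45.0))  # a typical cold-aisle reading
print(within_a1_envelope(95.0, 45.0))  # hotter than the A1 envelope allows
```

A real deployment would feed this check from live sensor data and raise an alert on any out-of-envelope reading.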
Servers should be kept between 59°F and 89.6°F.
Evolving Requirements And Possibilities
ASHRAE guidelines are constantly in flux, having changed in 2004, 2008, and 2011. It helps for data center managers to be well educated and well-read when it comes to these changing requirements. For instance, in 2015, ASHRAE was looking at widening the relative-humidity envelope for data centers to support increased efficiency.
There are some facilities that are taking the server room temperature beyond the currently accepted envelope. For example, Google Belgium’s data center operates at a staggering 95°F. This is helping to curb cooling costs and decrease the environmental impact of these huge computing facilities.
Energy Efficiency of Higher Server Room Temperature
Is the “cooler is better” mentality coming to an end? The industry is investigating whether nudging setpoints up a few degrees can keep IT operations running reliably while conserving energy and lowering expenses. The temperature of the water used for cooling is rising as well.
A view of the server pods in the Microsoft Dublin data center, a design that allows the company to run its servers in temperatures up to 95°F
The server room temperature in most data centers is between 68°F and 72°F, with some as low as 55°F. Raising the baseline temperature within the data center can save money by decreasing the amount of energy consumed for air conditioning. For every degree the setpoint moves upward, data center managers can expect to save an estimated 4% in energy costs. Google, Intel, Sun, and HP are among the corporations praising the cost-saving benefits of higher data center baseline temperatures.
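The 4%-per-degree rule of thumb above can be turned into a rough estimate. The sketch below assumes the savings compound per degree; the dollar figure and the compounding model are illustrative assumptions, and real savings depend on the facility:

```python
def estimated_savings(annual_cooling_cost: float,
                      degrees_raised: float,
                      pct_per_degree: float = 0.04) -> float:
    """Rough dollar savings from raising the setpoint N degrees (°F),
    assuming the ~4%-per-degree rule of thumb compounds per degree."""
    remaining = annual_cooling_cost * (1 - pct_per_degree) ** degrees_raised
    return annual_cooling_cost - remaining

# Raising a 68 °F room to 72 °F with a hypothetical $100,000 cooling bill:
print(round(estimated_savings(100_000, 4)))  # → 15065
```

Even this crude model shows why operators take a few degrees of headroom seriously.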
However, raising the thermostat may leave less time to recover from a cooling failure, and is only recommended for firms that have a thorough understanding of their facility’s cooling conditions.
Examples of Use Cases
Improved monitoring and airflow control are allowing data center operators to be more aggressive with higher temperatures in real-world deployments. In some cases, raising the server room temperature allows businesses to run facilities that use chillers sparingly or not at all.
Chillers, which cool the water used in data center cooling systems, are common but consume a great deal of power. As the focus on energy prices grows, many data centers are reducing their reliance on chillers to improve the energy efficiency of their buildings.
Can servers, on the other hand, withstand the heat? Recent data suggests servers are significantly more durable than previously thought. Intel researchers conducted a groundbreaking study in which they cooled a test data center environment using only outside air. The study found that even when temperatures were elevated to 80°F, hardware failure rates were substantially lower than predicted. Lacking this kind of granular insight, many data centers waste energy by over-cooling.
Companies have grown more comfortable operating servers in warmer environments as a result. eBay’s Phoenix facility and at least one Middle Eastern data center operator run without chillers, and both Google and Microsoft use fresh-air cooling instead of chillers in some of their facilities.
Raising the Chiller Water’s Temperature
For facilities using chillers, raising the cooling set point in the data center can allow the manager to also raise the temperature of the water in the chillers, reducing the amount of energy required to cool the water and thus saving energy costs. Facebook retooled the cooling system in one of its existing data centers in Santa Clara, California, and trimmed its annual energy bill by $229,000. While many factors were involved, optimizing the cold aisle and server fan speed allowed Facebook to raise the temperature at the CRAH return from 72°F to 81°F. The higher air temperature in turn permitted a higher supply-water temperature from its chillers, requiring less energy for refrigeration. The chiller water supply temperature was raised by 8°F, from 44°F to 52°F.
The Chill Off 2 unintentionally demonstrated the potential for Clustered Systems to achieve even better efficiency in one study. During testing, a chiller failure caused the temperature of the chilled water in the prototype’s cooling distribution unit (CDU) to rise from 44°F to 78°F. CPU temperatures in the Clustered Systems climbed throughout the 46-minute cooling interruption, yet the servers continued to run. “Observations during the use of 78°F (25.5°C) chilled water temperature indicate that the Clustered Systems design potentially can be operated with very low-cost cooling water, providing additional energy savings compared to the test results,” according to the Lawrence Berkeley National Laboratory report on the Clustered Systems technology.
Some operators are deliberately raising the temperature of their cooling water. For the Swiss Federal Institute of Technology Zurich, IBM deployed a supercomputer cooled by hot water. The Aquasar cooling system employs water at temperatures up to 140°F and uses up to 40% less energy than an air-cooled machine. The technology also reuses waste heat to warm university buildings, substantially lowering Aquasar’s carbon footprint.
Let AKCP Monitoring Solutions Keep Track
AKCP Thermal Maps
Data center monitoring with thermal map sensors helps identify and eliminate hotspots in your cabinets by flagging areas where the temperature differential between front and rear is too high. A thermal map consists of a string of six temperature sensors and an optional two humidity sensors.
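The front-to-rear differential check a thermal map enables can be sketched in a few lines. The 20°F alert threshold and the three-position layout below are illustrative assumptions, not AKCP defaults:

```python
DELTA_T_ALERT_F = 20.0  # hypothetical front-to-rear differential threshold, °F

def hotspots(front_temps_f, rear_temps_f, threshold=DELTA_T_ALERT_F):
    """Return the sensor positions where the rear reading exceeds the
    paired front reading by more than the threshold."""
    return [i for i, (front, rear) in enumerate(zip(front_temps_f, rear_temps_f))
            if rear - front > threshold]

# Top/middle/bottom readings from a cabinet's front and rear sensor strings:
print(hotspots([68, 70, 69], [85, 95, 84]))  # → [1] (middle of the cabinet)
```

A large differential at one position usually points to blocked airflow or a cluster of hot equipment at that height in the rack.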
Single Port Temperature and Humidity Sensor
In situations where both temperature and humidity can be critical, you can keep up to speed on the current conditions using this sensor. Combining temperature and humidity into one sensor frees up an additional intelligent sensor port on your base unit.
SNMP Temperature Sensor
The AKCP temperature sensor is compatible with all of our sensorProbe and securityProbe series base units. Designed to record accurate temperature data, it is ideal for giving advance warning of temperature fluctuations that can potentially damage sensitive equipment. This advance notification can protect your data and systems from catastrophic events.
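Advance warning ultimately comes down to classifying each polled reading against thresholds before the critical limit is reached. In the sketch below, `read_temp_f()` is a stub standing in for the actual SNMP GET against a base unit, and the warning threshold is an illustrative assumption; the critical limit is the A1 upper bound cited earlier:

```python
WARN_F = 80.0      # hypothetical early-warning threshold, °F
CRITICAL_F = 89.6  # ASHRAE A1 upper limit cited earlier in this article

def classify(temp_f: float) -> str:
    """Map a temperature reading to an alert level for notification."""
    if temp_f >= CRITICAL_F:
        return "critical"
    if temp_f >= WARN_F:
        return "warning"
    return "ok"

def read_temp_f() -> float:
    # Stub: in practice, replace with an SNMP GET against the base unit.
    return 72.4

print(classify(read_temp_f()))  # → ok
```

Polling this check on a short interval is what turns a passive sensor into the kind of advance notification described above.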
Why It’s Important
As noted earlier, data center temperature is one of the most important variables for IT managers to monitor. When equipment temperatures exceed acceptable limits, problems can arise.
When the room runs too hot, for example, equipment can overheat, resulting in an unplanned server outage.
It’s also possible for servers to run too cool. While this may not cause any downtime, administrators who keep their computer rooms at low temperatures will not enjoy reading their power bills, and the environmental impact of over-cooling is far from insignificant. The objective is to keep the facility at the highest feasible temperature while ensuring that servers do not overheat.