Datacenter administrators work to ensure proper airflow for maximum efficiency. The air temperature through the IT equipment and cooling units is consistently monitored, but these are not the only areas of concern. The temperature of air travelling from the cooling unit to the front of the servers, and of the exhaust air returning to the cooling unit, is also critical. These paths are less commonly monitored, yet they have a significant impact on the data center. Understanding the four Delta (∆) T’s is essential to running an efficient data center.
What is Delta T?
A server takes in cool air. Inside the server, this air warms as it picks up the heat the server produces, so the exhaust air can be 10 °C to 15 °C hotter than when it entered. From the hot aisle, the air returns to the cooling units, where the heat is removed; when the air exits the cooling unit, it is correspondingly 10 °C to 15 °C cooler. This temperature difference along the airflow path is called Delta (∆) T.
What affects Delta T?
Several factors contribute to ∆T. A temperature difference between the supply air and the server inlet is caused by inadequate airflow or open pathways between the hot and cold aisles, and indicates hot air recirculation (HAR) inside the facility. HAR leads to hotspot problems that can increase operating costs.
A temperature difference between the exhaust and the return air can be caused by air leakage through the raised floor, excess cooling, and unfilled slots in the rack. These gaps act as a shortcut for air from the front of the servers to the hot aisle, so air slips through the openings without passing through the servers.
The 4 Delta T’s of a Data Center
As IT equipment consumes electricity, the kilowatts are converted to heat, which is added to the airflow, so there is a constant relationship between power, airflow, and temperature rise. Cold air from the cooling systems enters the racks, carries away the heat, and warms in the process. The ∆T of air through the IT equipment should be around 10 °C to 20 °C depending on the type of equipment. For instance, a blade server, which traditionally runs at a higher ∆T, consumes approximately 90 cubic feet per minute per kW (CFM/kW) at 20 °C ∆T, while a “pizza box” server at 10 °C ∆T consumes roughly 158 CFM/kW.
There is a simple calculation that can illustrate the difference ∆T can have with different IT equipment.
CFM = (3.16 x Watts) / ΔT
Where:
CFM = cubic feet per minute of airflow through the server
3.16 = factor for the density of air at sea level, in relation to °F
ΔT = temperature rise of air passing through the server, in °F
A blade server typically runs at a higher ∆T. If we assume it consumes 400 Watts of power:
CFM = (3.16 x 400) / 35°F = 36 CFM
A typical pizza-box style server runs at a lower ∆T but consumes the same power:
CFM = (3.16 x 400) / 20°F = 63 CFM
So although both pieces of equipment consume the same power, their airflow (CFM) requirements are very different. That is why ∆T is such an important metric: you need to balance the air supply from the cooling system with the demands of the servers. Too much and you waste energy overcooling; too little and servers can overheat.
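The two worked examples above can be reproduced with a short script; it is a minimal sketch of the article’s formula, with the 400 W load and the two ΔT values taken directly from the text.

```python
# Airflow a server needs, per the article's formula:
#   CFM = (3.16 x Watts) / deltaT_F
# where 3.16 is the sea-level air-density factor for Fahrenheit.

def required_cfm(watts: float, delta_t_f: float) -> float:
    """Cubic feet per minute of airflow through the server."""
    return (3.16 * watts) / delta_t_f

# Blade server: 400 W at a 35 degF temperature rise
blade = required_cfm(400, 35)   # ~36 CFM
# Pizza-box server: same 400 W but only a 20 degF rise
pizza = required_cfm(400, 20)   # ~63 CFM
print(f"Blade: {blade:.0f} CFM, Pizza box: {pizza:.0f} CFM")
```

Note how halving the ΔT nearly doubles the airflow the cooling system must deliver for the same power draw.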
The second ∆T is the air through the cooling units. For efficiency, this should match the delta across the IT equipment, and the total volumetric flow (CFM) of conditioned air should also match the total flow rate through the IT equipment. In practice, however, this is difficult to achieve.
The third ∆T is from the IT equipment exhaust to the cooling unit and is associated with bypass airflow. In most cases, the exhaust air temperature drops as it returns to the cooling units, which contradicts the efficient cooling practice of “returning the warmest possible air.”
The last ∆T that requires monitoring is from the cooling unit’s supply air to the IT equipment intake. The supply air temperature rises on its way to the front of the servers because warm air is drawn in under the floor or into the ductwork, and air from the hot aisle can also find its way back into the cold aisle. This means energy is wasted, as not all of the cooling reaches the servers. The air path or ductwork from the CRAC to the servers should be as short as possible and should avoid passing through hot areas.
Power Usage Effectiveness (PUE)
PUE is a measurement of the energy efficiency of a data center: the ratio of the total power entering the data center to the power consumed by the IT equipment. The goal is to minimize the amount of power that is not used by the equipment itself. Computing the PUE gives administrators insight into their operational efficiency. The ideal PUE is 1.0, meaning the IT equipment uses 100% of the power.
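The ratio described above can be sketched in a few lines; the 130 kW / 100 kW figures are illustrative values chosen to yield the PUE of 1.3 discussed in this article, not measured data.

```python
# PUE = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt entering the facility
# reaches the IT equipment; anything above 1.0 is overhead
# (cooling, lighting, power distribution losses, etc.).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness of a data center."""
    return total_facility_kw / it_equipment_kw

# Illustrative example: 130 kW total draw, 100 kW IT load
print(pue(130.0, 100.0))  # 1.3 -- i.e. 30% overhead beyond the IT load
```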
Knowing the four Delta T’s is the key to starting to improve a data center’s PUE. If the temperature difference is 25 °F (the mid-range of the values mentioned above), the data center has an estimated PUE of 1.3, indicating proper airflow distribution.
TRY AKCP FREE ONLINE PUE CALCULATOR
Managing Delta T
A carefully planned Delta T management strategy is the foundation of an efficient data center. Without one, air follows the natural dynamics set up by the facility design, causing large temperature differences. The ASHRAE “Thermal Guidelines for Data Processing Environments” provides ways to evaluate cooling efficiency and improve it further.
Here are the 4 F’s of Delta T management:
1. Floor Management
Floor management is the most common element of many cooling-efficiency methods. The raised floor is built above the concrete slab, typically 2 to 4 feet high, to create a plenum for cooling airflow and electrical services, and the space beneath it should be cleaned regularly to eliminate dirt. Perforated tiles deliver cold air directly in front of the racks; it is important not to place them too close to the servers.
2. Filling the Racks
IBM suggests that a standard 19-inch rack delivers sufficient airflow through the IT equipment. To avoid air leakage or bypass airflow, blanking panels are recommended to fill any empty rack space. Other fillers include air-containment cubes and brush grommets. Cables should also be properly routed and tied so they do not obstruct airflow.
3. Forming the Right Layout
The hot-aisle/cold-aisle layout is one of the industry’s first cooling strategies. It arranges the server racks in adjacent rows with the cold-air intakes facing one way and the hot exhausts facing the other. It is widely used because it is a prerequisite for many subsequent strategies: by separating hot and cold air into distinct aisles, it provides clear air-delivery paths and ensures that the IT equipment pulls in cooler air.
4. Fixing Potential Problems
Preventive maintenance is an essential part of data center management. It includes a thorough, regular check of the cooling system, which surfaces issues early and allows administrators to fix potential problems before they lead to a disaster.
Temperature Sensors in Managing Delta T
Temperature sensors measure the environmental conditions, and the readings are recorded for interpretation, increasingly with the help of Artificial Intelligence (AI) analytics. These analytics play an important role in a Delta T management strategy. As Peter Drucker wrote, “What gets measured gets managed.”
Installing temperature sensors on the rack provides rack-level temperature readings that can be used to measure Delta T. For instance, administrators can check whether the supply air temperature matches the temperature at the server air intake. The sensors can also send an alert if the temperature difference exceeds the acceptable Delta T range.
However, the quality of the sensor and its location affect the accuracy of the data. A sensor should offer stability, quick response time, built-in calibration, and flexible placement.
As for location, sensors should be placed at strategic points on the rack: the top, bottom, back, and center. ASHRAE recommends using six or more sensors per rack for the most accurate reading.
AKCP Cabinet Analysis Sensor
Administrators must give each of these Delta T’s ample attention. In a data center, it is critical to know your parameters: not just to monitor them, but to control them.
When looking for a temperature sensor to rely on, the AKCP Cabinet Analysis Sensor (CAS) can be part of your monitoring solution. The CAS features a cabinet thermal map for detecting hot spots and a differential pressure sensor for airflow analysis.
Differential Temperature (△T)
Cabinet thermal maps consist of two strings, each with 3x temperature and 1x humidity sensors, monitoring the temperature at the front and rear of the cabinet at the top, middle, and bottom. The △T value, the front-to-rear temperature differential, is calculated and displayed with animated arrows in AKCPro Server cabinet rack map views.
Differential Pressure (△P)
There should always be positive pressure at the front of the cabinet to ensure that air from the hot and cold aisles is not mixing. Because air travels from areas of high pressure to areas of low pressure, efficient cooling requires checking that the pressure is higher at the front of the cabinet than at the rear.
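The pressure rule above can be expressed as a simple check. The function name, readings, and the 0.5 Pa margin are assumptions made for illustration, not values or an API from the CAS documentation.

```python
# Hedged sketch: flag a cabinet whose front (cold-aisle) pressure is not
# comfortably higher than its rear pressure. Margin is an assumed threshold.

def pressure_ok(front_pa: float, rear_pa: float, margin_pa: float = 0.5) -> bool:
    """True when the front of the cabinet is positively pressurized."""
    return (front_pa - rear_pa) >= margin_pa

print(pressure_ok(5.0, 2.0))  # True  -- air flows front to rear as intended
print(pressure_ok(1.0, 3.0))  # False -- hot and cold air are likely mixing
```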
Rack Maps and Containment Views
With an L-DCIM, or a PC with AKCPro Server installed, dedicated rack maps displaying Cabinet Analysis Sensor data can be configured to give a visual representation of each rack in your data center. If you are running hot/cold aisle containment, containment views can also be configured to give a sectional view of your racks and containment aisles.
Power Usage Effectiveness (PUE)
When this data is combined with the power consumption from the in-line power meter, you can safely make adjustments to the data center cooling systems, without compromising your equipment, while instantly seeing the changes in your PUE numbers.
Read more about Delta T monitoring at www.akcp.com. To request more information, send your inquiries to [email protected]