Within the data center, hot-aisle containment encloses the hot aisles. The IT equipment’s discharge air enters the enclosed hot aisle and is directed to cooling equipment via a ceiling plenum or ductwork. The cold supply air enters the room through a raised floor, through ducts, or directly into the general space from the air conditioning units. As effective as containment is, some aspects of it remain controversial and have been debated for a long time.
Network Switch Airflow
Many large network switches are not designed to accommodate hot/cold aisles. Their unusual airflow can be forced into a traditional front-to-back pattern using baffles from rack and cabinet makers, but this is difficult when hundreds of cables already terminate on immovable patch panels. Different network switches have different nonstandard airflow patterns, so a single baffle design may not fit all. The extra air resistance of a baffled rack may also increase the switches’ fan speed and energy use. As a result, expensive switches often end up in standard racks, where they circulate and recirculate their own hot exhaust air.
It is not only the larger switches that have cooling issues. Top-of-rack switches are becoming more common as a way to aggregate large numbers of network connections in server cabinets, and their airflow is front-to-back. However, server connections are at the rear of the cabinet while switch network ports are on the front. This leads to switches being mounted backward, defeating their own airflow designs.
Hot-aisle and cold-aisle containment have been standard features of data center design for many years. It is reasonable to demand that switch makers follow the same airflow conventions as everyone else.
Modern Fire Suppression Designs
In 2013, the NFPA 75 and 76 fire protection standards were updated to address the fire suppression issues introduced by containment. Isolating hot and cold aisles improves cooling and energy efficiency, but unless sprinklers are installed in every aisle, the air barriers will block them; the barriers must move out of the way for gas or water to reach the fire. Early designs used fusible links and heat-shrink panels to address fire safety, but by the time the links melted and the air barriers fell out of the way, the fire was probably already blazing. None of these approaches is recognized by the NFPA revisions.
Containment panels must now release electrically when smoke is detected, and they cannot fall where they would create a tripping hazard or obstruct an exit from an aisle. That is a difficult task. In new data center construction, installing fire protection heads in each aisle is simple, but retrofitting is complicated, expensive, and disruptive.
While these adjustments may reduce energy efficiency, the need for fire safety takes precedence.
Smoke Detection System
What about early-warning systems, such as Very Early Smoke Detection Apparatus (VESDA) and Fire Alarm Aspiration Sensing Technology (FAAST), when it comes to fire protection?
VESDA and FAAST are frequently suggested in data center facilities because they detect a developing fire long before the major fire suppression systems are activated, preventing significant damage. However, because most computer room air conditioners discharge high-velocity air, designing and installing these systems is complicated.
Local legislation may place restrictions on VESDA and FAAST that make them impracticable. Consider the new NFPA 75 and 76 regulations. What does it mean when the design calls for containment to drop when smoke is detected? Do the drapes come down as soon as the aspirating system detects a problem? When the main suppression system kicks in, does it fill pre-action pipes or start the countdown to gas release?
The aspirating system can sound an alert at various threshold levels to provide an early warning, and if the smoke density continues to rise, suppression can be triggered. But what if local legislation mandates the evacuation of the entire building at the first indication of smoke? Does this render the early detection system impracticable, or does it necessitate a threshold so high that the system performs no better than a standard smoke detector? If local rules make an early detection system ineffective, will you fight for an exception or simply forfeit the potential benefits?
Wider Operating Temperature Range
The ASHRAE TC 9.9 thermal guideline recommends inlet temperatures up to 27°C for standard servers and allows up to 40°C or more for special hardware classes. The wider range increases the number of hours and days per year when free cooling can be used instead of mechanical refrigeration. Even without free cooling, slightly higher operating temperatures save substantial energy and money.
All of the major manufacturers agreed that equipment, both new and old, can operate at 27°C. The 2011 update to the ASHRAE guideline showed that, if cooling is managed correctly, the effect of higher temperatures on server life expectancy is small.
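For reference, the class envelopes from the 2011 guideline can be captured in a small lookup. The limits below are the commonly published values, not quoted from this article, and should be verified against the current edition of the guideline; the function names are illustrative:

```python
# Commonly published allowable inlet ranges from the 2011 ASHRAE TC 9.9
# thermal guideline (degrees C). Verify against the current edition before
# relying on these values.
ALLOWABLE_C = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}
RECOMMENDED_C = (18.0, 27.0)  # recommended envelope shared by all classes


def classes_allowing(inlet_c):
    """Return the ASHRAE classes whose allowable range covers this inlet temperature."""
    return [cls for cls, (lo, hi) in ALLOWABLE_C.items() if lo <= inlet_c <= hi]


def in_recommended_range(inlet_c):
    """True when the inlet temperature sits inside the recommended envelope."""
    lo, hi = RECOMMENDED_C
    return lo <= inlet_c <= hi


print(classes_allowing(40.0))       # only the higher classes permit 40 C
print(in_recommended_range(27.0))   # 27 C is the top of the recommended range
```

A 27°C inlet falls within every class and at the top of the recommended envelope, which is why it is the figure most operators design to.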
Temperature And Noise
Concerns about noise and heat exposure have recently been raised as a result of high equipment density and rising operating temperatures. The Occupational Safety and Health Administration (OSHA) in the United States regulates human heat and noise exposure based on a combination of exposure level, duration, and rest time.
Noise levels have escalated to the point that OSHA requirements must be met in data centers. Employees should be offered hearing conservation programs as well as hearing protection. Pay special attention to employees who work in fully enclosed aisles, where noise levels are amplified by resonance within the space.
While in-row and overhead coolers are extremely efficient, noise control is a concern. Employees who spend fewer than four hours each day in a noisy aisle are unlikely to be harmed. Survey loud aisles to determine whether they pose a hazard to staff.
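The "level plus duration" rule OSHA applies can be made concrete. A minimal sketch using the standard OSHA permissible-duration formula (8 hours at 90 dBA, halving for every 5 dB increase) and the daily noise dose calculation; the example exposure figures are hypothetical:

```python
def permissible_hours(level_dba):
    """OSHA 29 CFR 1910.95 permissible exposure duration in hours for a
    given A-weighted sound level, using the 5 dB exchange rate:
    T = 8 / 2^((L - 90) / 5)."""
    return 8.0 / (2.0 ** ((level_dba - 90.0) / 5.0))


def noise_dose(exposures):
    """Daily noise dose as a percentage: 100 * sum(C_i / T_i), where C_i is
    the hours spent at a level and T_i the permissible hours at that level.
    A dose over 100% exceeds the OSHA permissible exposure limit."""
    return 100.0 * sum(hours / permissible_hours(level)
                       for level, hours in exposures)


# Hypothetical day: four hours in a 95 dBA contained aisle, four quiet hours.
print(permissible_hours(95))               # 4.0 hours allowed at 95 dBA
print(noise_dose([(95, 4.0), (80, 4.0)]))  # dose slightly over 100%
```

This is why the four-hour figure above matters: 95 dBA alone exhausts the daily allowance in exactly four hours, and any additional measurable exposure pushes the dose over the limit.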
Despite what technicians working in today’s hot aisles may believe, temperature poses less of a health danger. Temperature exposure limits are calculated using the Wet Bulb Globe Temperature (WBGT). The globe temperature is measured with a special thermometer at the center of a 6-inch-diameter black copper sphere; it captures radiant heat, which outdoors is driven mostly by solar angle and wind speed.
WBGT also takes humidity into account, but since humidity levels in data centers should never exceed 60%, calculating a WBGT estimate is straightforward. Consider a worst-case scenario:
- Maximum inlet temperature: 27°C
- Maximum dew point temperature: 15°C
- Temperature delta through the servers: 14°C
- Resulting hot aisle temperature: 40.6°C
- Resulting WBGT: 28.5°C
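These figures can be reproduced with standard psychrometric approximations. The sketch below uses the Magnus formula for vapour pressure, Stull’s (2011) empirical wet-bulb approximation, and the indoor WBGT weighting (0.7 × wet bulb + 0.3 × globe, with globe taken as dry bulb since there is no solar load); it lands within about half a degree of the article’s 28.5°C figure, the difference coming from the approximations used:

```python
import math


def saturation_vp_hpa(t_c):
    """Magnus approximation for saturation vapour pressure (hPa)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))


def relative_humidity(t_db_c, t_dew_c):
    """Relative humidity (%) from dry bulb and dew point temperatures."""
    return 100.0 * saturation_vp_hpa(t_dew_c) / saturation_vp_hpa(t_db_c)


def wet_bulb_stull(t_db_c, rh_pct):
    """Stull (2011) empirical wet-bulb approximation, valid roughly for
    RH 5-99% and temperatures -20..50 degC."""
    return (t_db_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_db_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)


def wbgt_indoor(t_db_c, t_dew_c):
    """Indoor WBGT: 0.7 * natural wet bulb + 0.3 * globe temperature.
    Indoors, with no solar load, globe temperature ~= dry bulb."""
    rh = relative_humidity(t_db_c, t_dew_c)
    return 0.7 * wet_bulb_stull(t_db_c, rh) + 0.3 * t_db_c


# Worst case from the list above: 27 C inlet + 14 C delta-T gives a hot aisle
# around 40.6 C (~105 F), with a 15 C dew point.
print(round(wbgt_indoor(40.6, 15.0), 1))
```

Running the same function with the hotter Class 3/4 numbers discussed below (a 51.7°C discharge) shows why the permissible work fraction drops so sharply: WBGT rises roughly linearly with discharge temperature at fixed dew point.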
The majority of hot aisle labor is cabling, which OSHA classifies as light handwork performed in a standing position while wearing light clothing. The maximum permitted WBGT under these conditions is 30°C, so continuous light work in the hot aisle is legal. Even moderate work is permitted for 50% of the time, with the other 50% spent resting.
However, OSHA requirements will apply to newer ASHRAE Class 3 and 4 equipment, which can operate at up to 45°C and up to 90% relative humidity. With a 40°C inlet temperature, the discharge temperature is likely to be around 51.7°C, resulting in a WBGT of 31.7°C. At this temperature, only light work is acceptable, and only for 25% of the time, with 75% rest. Furthermore, the OSHA maximum safe touch temperature for heated surfaces is 52°C.
Other Common Issues With Hot Aisles
- Containment and heat extraction systems must extend to ceiling height, which is not always possible or cost-effective, making this a more expensive solution.
- The warmer air makes it more difficult for employees to enter and work in the contained aisles.
- Depending on ceiling height, fire suppression systems may not meet building codes, potentially requiring a redesign to bring them into compliance.
AKCP Monitoring Solutions
Monitoring solutions for the hot aisle containment method are available from AKCP.
To visualize the current status of your data center containment aisle, use sectional views. Slice through the aisle at any rack location to view differential air pressure between the front and back of the racks, thermal maps, ΔT values, and hot and cold aisle containment status. Combined with power meter readings, this information can be used to cross-check your PUE values and fine-tune your data center for maximum efficiency.
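The PUE cross-check mentioned above is simple arithmetic: total facility power divided by IT equipment power. A minimal sketch with hypothetical meter readings:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; real data centers run above it because
    cooling, lighting, and power distribution losses add overhead."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw


# Hypothetical readings: 1200 kW at the utility meter, 800 kW at the IT load.
print(round(pue(1200.0, 800.0), 2))  # 1.5
```

Tracking this ratio over time, rather than as a single snapshot, is what makes the containment tuning described above measurable.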