The density and power requirements of data centers are increasing, and with them the need for backup power. Yet data centers have a finite amount of space, so a small but powerful uninterruptible power supply (UPS) is often the answer. The UPS provides temporary battery power in the event of a power outage; it is designed to bridge only the short interval until the backup generators have started and are delivering stable power.
Data centers must, however, weigh cooling, redundancy, efficiency, and other factors when selecting the right UPS. Small and medium-sized data centers are being reshaped by new applications such as high-performance computing and artificial intelligence, and efficient operation is essential as energy prices rise and budgets tighten. Packing more IT resources into each rack is one way to cut costs, save space, and reduce energy consumption.
However, this poses problems. If the capacity of the deployed UPS does not keep pace with the new IT equipment it supports, adding UPS systems can run into floor-space constraints. The capabilities of a UPS system are also shaped by the data center's architecture. Selecting a backup power solution for high-density applications therefore requires considering data center and UPS technologies together.
HPC, AI, and Density
Photo Credit: datacenterfrontier.com
Data center power infrastructure is being forced to adapt by big data analytics, machine learning, and AI. To handle these new workloads, high-performance computing (HPC) platforms pair GPUs and CPUs working in tandem. This can mean new racks of high-end GPUs performing floating-point calculations for applications such as medical diagnostics. With HPC, per-rack density can reach 20-30 kW and higher. In 451 Research's survey "The Infrastructure Imperative," more than half of the respondents said they were running high-density HPC data center infrastructure.
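As a back-of-envelope illustration of what those density figures imply for backup power, the sketch below estimates the UPS capacity a row of HPC racks would demand. The rack count and 20% headroom margin are assumptions for illustration, not figures from the article.

```python
def required_ups_kw(racks: int, kw_per_rack: float, headroom: float = 0.2) -> float:
    """Estimate total UPS capacity for a row of racks, with a safety margin.

    headroom: assumed oversizing fraction (0.2 = 20%), an illustrative value.
    """
    return racks * kw_per_rack * (1 + headroom)

# 10 hypothetical HPC racks at 25 kW each (mid-range of the 20-30 kW figure above)
print(required_ups_kw(10, 25.0))  # ~300 kW with 20% headroom
```

Even a modest row of HPC racks can quickly exceed the capacity of a UPS sized for traditional 5-10 kW racks, which is the floor-space pressure described above.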
Converged And Hyper-Converged Infrastructure
Large rack-scale platforms that combine compute, storage, and networking into a complete solution are commonly referred to as converged infrastructure (CI). Hyper-converged infrastructure (HCI), by contrast, comes in 1U and 2U rack units that pair a multi-core server with a local storage array. The main architectural distinction is that in CI, storage is attached directly to the physical server, whereas HCI distributes storage among all virtual machines (VMs).
Hyper-converged infrastructure is now used in nearly half of data centers. In these virtualized, high-density IT infrastructures, a power loss can have a far broader impact than in a non-converged environment. UPS durability, along with capabilities such as N+1 redundant operation, is therefore critical in these cases.
Power As A Major Cause Of Downtime
Photo Credit: www.insight.com
By a large margin, power outages are the most common cause of data center outages. According to research by the Uptime Institute, power disruptions account for 37% of all outages, while software and IT systems account for only 22%.
One factor that may be contributing to these disruptions is "creeping criticality," which occurs when infrastructure crosses the criticality threshold because resiliency needs grow while the power infrastructure remains unchanged. This circumstance can arise as density increases through the gradual adoption of more HPC or HCI.
And data center downtime is costly: over 10% of recent incidents resulted in costs of over $1 million. All the more reason to examine UPS requirements.
Mission-Critical Data UPS
Standby, line-interactive, and online are the three main types of uninterruptible power supply (UPS) available today. The first two are limited in how well they correct power-quality issues from the supply, and their switchover times in the event of an outage can be as long as 25 ms, making them unsuitable for most mission-critical applications in small and medium-sized data centers.
However, an online double-conversion UPS has no switchover time. These devices take AC power from the grid, convert it to DC to charge their batteries, and then convert power from the DC bus back to AC to feed the IT loads. The key advantage is that power always flows from the DC bus, so no switchover is needed if grid power is disrupted. The batteries also buffer transients and voltage drops, delivering very clean power.
Longer run times can be achieved with UPS systems operating in parallel. A parallel setup also keeps power flowing if one of the UPS units or battery strings fails. When two UPS systems run in parallel, the arrangement is sometimes called 1+1 redundancy. More generally, an N+1 architecture deploys multiple UPS systems in parallel, where N is the number of units required to handle the load and the +1 serves as a backup in case one of the N units fails. A parallel UPS arrangement is advantageous for mission-critical applications in industries such as banking, manufacturing, and healthcare, where downtime can have catastrophic consequences.
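The N+1 sizing rule above reduces to simple arithmetic. The sketch below, with purely illustrative load and unit-capacity figures, shows how many parallel UPS units a given load would require.

```python
import math

def ups_units_for_n_plus_1(load_kw: float, unit_kw: float) -> int:
    """UPS units needed so the load stays covered even if one unit fails (N+1).

    load_kw and unit_kw are illustrative inputs, not vendor specifications.
    """
    n = math.ceil(load_kw / unit_kw)  # N units sized to carry the full load
    return n + 1                      # plus one redundant unit

print(ups_units_for_n_plus_1(300.0, 100.0))  # 4 units: N = 3, plus 1 spare
```

The same function covers the two-unit case: a 90 kW load on 100 kW units gives N = 1, so two units in parallel, i.e., 1+1 redundancy.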
Batteries are an essential part of any online UPS, but their lifespan is limited, so battery replacement and maintenance are eventually unavoidable. Lithium-ion batteries, however, require less maintenance, offer higher energy density, and can easily last twice as long as lead-acid batteries. Fewer replacements mean lower cost, a significant benefit for high-density data centers with tight budgets.
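To make the "lasts twice as long" claim concrete, the sketch below counts battery-string replacements over a UPS planning horizon. The 4-year and 8-year service lives and the 12-year horizon are assumed round numbers for illustration only.

```python
import math

def battery_replacements(horizon_years: float, service_life_years: float) -> int:
    """Battery-string replacements needed over a planning horizon.

    The initial installation is not counted as a replacement.
    """
    return math.ceil(horizon_years / service_life_years) - 1

# Illustrative assumptions: 4-year lead-acid vs. 8-year lithium-ion service
# life, over a 12-year UPS horizon.
print(battery_replacements(12, 4))  # 2 lead-acid replacements
print(battery_replacements(12, 8))  # 1 lithium-ion replacement
```

Halving the replacement count also halves the associated labor and disposal events, which is where much of the lifecycle saving comes from.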
Bypass control allows the UPS to be disconnected from the power distribution system entirely. The ability to switch to grid power and take the UPS offline is critical in two scenarios: when the UPS malfunctions and when it requires maintenance. A bypass lets maintenance be performed without affecting the load. Without one, an undesirable situation could arise in which grid power is available but the data center is down because of a malfunctioning downstream UPS.