Data centers can now pack more computing power into smaller spaces. However, these high-density environments drain operating budgets, and power consumption is a serious concern for anyone who manages or engineers a data center. Conserving energy is a major challenge, especially as loads grow. The goal is not simply to save money, but also to run an environmentally friendly facility and to avoid the regulatory scrutiny or charges that excessive energy consumption can invite.
Whether small or large, data centers can save tens of thousands of dollars through smart choices. Wisely chosen IT hardware, power, and cooling infrastructure can save energy, and energy-efficient cooling systems complement this strategy. A data center with a thousand servers can save millions of dollars while also reducing its carbon footprint.
Read on to learn how to take advantage of this for your data center.
1. Turn Off Idle Equipment.
Even when data center devices sit idle, they still consume a significant amount of energy, so identify and power down equipment that is not doing useful work.
2. Server and Storage Virtualization
In a traditional data center, each application is deployed on its own dedicated server and storage. Workloads end up inefficiently scattered across multiple systems to maintain separation between them, and each platform, fully utilized or not, still draws close to the power it requires at peak load.
To virtualize is to aggregate servers and storage onto a shared platform while maintaining separation among operating systems, applications, data, and users. Under virtualization, applications run in separate “virtual machines,” but in fact they share the same hardware with other applications. This makes much smarter use of server capacity and power.
Virtualization won’t be the salvation for everyone. Your data center might have to be designed for periodic peak loads; in that case, having underutilized, idle hardware is par for the course. But virtualization can bring great benefits to most data centers.
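The arithmetic behind consolidation is simple: if servers average low utilization, several of their workloads can share one virtualized host run at a higher, still-safe utilization target. The numbers below are illustrative assumptions, not figures from the article:

```python
def consolidation_ratio(avg_utilization: float, target_utilization: float) -> float:
    """How many lightly loaded servers' workloads can share one virtualized host."""
    if not 0 < avg_utilization <= target_utilization <= 1:
        raise ValueError("utilizations must satisfy 0 < avg <= target <= 1")
    return target_utilization / avg_utilization

# e.g. servers averaging 10% CPU, consolidated onto hosts run at 60%:
ratio = consolidation_ratio(0.10, 0.60)
print(f"~{ratio:.0f} virtual machines per physical host")  # ~6
```

Each retired physical server then stops drawing its idle power, which is where the energy savings come from.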
3. Consolidation Strategy
Blade servers are the hardware most often recommended for this strategy, because for a given amount of energy input you get more processing output. Unlike rack servers, blade servers share a common power supply and storage with the other blades in the blade chassis.
Blade servers use up to 40% less power. At 10 cents per kWh, a data center with 1,000 servers can save up to $175,000 a year.
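The $175,000 figure can be roughly reproduced if we assume an average draw of about 500 W per conventional server (an assumption; the article doesn't state a per-server wattage):

```python
SERVERS = 1000
WATTS_PER_SERVER = 500       # assumed average draw of a conventional rack server
SAVINGS_FRACTION = 0.40      # blade servers use up to 40% less power
PRICE_PER_KWH = 0.10         # 10 cents per kWh, as in the article
HOURS_PER_YEAR = 8760

saved_kw = SERVERS * WATTS_PER_SERVER * SAVINGS_FRACTION / 1000
annual_savings = saved_kw * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"${annual_savings:,.0f} per year")  # $175,200 per year, close to the cited $175,000
```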
A second opportunity exists in consolidating storage. There are two ways to reduce energy usage:
Tiered data storage is part of the foundation of information lifecycle management (ILM). It helps companies reduce total storage costs while ensuring compliance and performance. For example, data can be archived to meet regulatory retention requirements, and archives are also valuable when data must be restored after corruption or another critical failure.
Since larger-capacity disk drives are more efficient per unit of storage, consolidating storage improves utilization and justifies the use of those larger drives, making your data center more efficient.
Organizations can also reap great savings by consolidating data centers into one location, sharing cooling and backup systems to support larger loads, not to mention the savings on floor space.
4. Turn on the CPU’s Power-Management Feature.
Almost 50% of a server’s power consumption is used by its central processing unit. Over time, manufacturers have developed energy-efficient chipsets and dual- and quad-core technologies designed to process higher loads with less power. But there are also other options for reducing CPU power consumption.
Many CPUs have power-management features that allow the processor to optimize its own power consumption by switching among multiple performance states, without having to be reset. When operating at low utilization, the CPU minimizes power consumption by ratcheting down its processor power state. This adaptive power management reduces power draw without compromising processing capability.
Many users have purchased servers with this CPU capability but haven’t enabled it. If you have the feature, turn it on. If you don’t, consider it when making future server purchases.
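Why does stepping down a performance state save so much? Dynamic CPU power follows the CMOS relation P ∝ C·V²·f, so lowering both voltage and frequency compounds the savings. The voltage/frequency pairs below are illustrative, not a real processor’s P-state table:

```python
# Dynamic CPU power scales with V^2 * f (CMOS dynamic-power relation).
# These (voltage, frequency) pairs are illustrative, not real P-state tables.
P_STATES = {
    "P0 (max performance)": (1.20, 3.0e9),   # volts, hertz
    "P1":                   (1.05, 2.4e9),
    "P2 (low power)":       (0.90, 1.6e9),
}

def relative_power(v: float, f: float, v0: float = 1.20, f0: float = 3.0e9) -> float:
    """Dynamic power relative to the P0 state, from P proportional to V^2 * f."""
    return (v * v * f) / (v0 * v0 * f0)

for name, (v, f) in P_STATES.items():
    print(f"{name}: {relative_power(v, f):.0%} of P0 power")
```

With these sample values, the low-power state draws roughly 30% of full-speed power, which is why leaving the feature disabled wastes so much energy at low utilization.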
5. Efficient Power Supply for IT Equipment
After the CPU, the second biggest culprit in power consumption is the power supply unit (PSU), which takes about 25 percent of the server’s power budget. The third is the set of point-of-load (POL) voltage regulators (VRs) that convert 12 V DC into the various DC voltages required by loads such as processors and chipsets.
Several industry initiatives are improving the efficiency of these server components. The initial cost of a high-efficiency power supply unit is higher, but the energy savings repay it.
6. Upgrade to a High-Efficiency UPS
PDUs already run at about 94% efficiency, so the biggest remaining power-conversion losses fall to the UPS.
In the early days of computing, UPS efficiency topped out at around 80%. Today’s designs have pushed UPS efficiency up to 94%, largely by eliminating the need for power transformers.
Even small increases in UPS efficiency can quickly translate into thousands of dollars. Using 10% less power equals roughly $86,000 a year for a data center that operates a thousand servers. What’s more, a more efficient UPS can extend its battery runtime and produce cooler operating conditions, and lower temperatures extend the life of components and increase overall reliability and performance.
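The $86,000 figure is of the right order if we assume a total facility draw of roughly 1 kW per server, including power and cooling overhead (an assumption; the article doesn't give a baseline load):

```python
SERVERS = 1000
KW_PER_SERVER = 1.0          # assumed total facility draw per server, incl. overhead
PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 8760

facility_kw = SERVERS * KW_PER_SERVER
saved_kw = facility_kw * 0.10          # a 10% reduction in power drawn
annual_savings = saved_kw * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"${annual_savings:,.0f} per year")  # $87,600 per year, on the order of the cited $86,000
```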
7. Cooling Best Practices
Up to 60% of a data center’s utility bill goes to cooling systems alone. Anyone might think that is far too much energy to spend on a single function, and often it is: the excess comes from inefficiently deployed cooling equipment that is not run at its recommended conditions.
Your facility may already have the resources to cut cooling costs; all you have to do is use them properly:
Use hot aisle/cold aisle enclosure configurations. By alternating equipment so there is an aisle with a cold air intake and another with a hot air exhaust, you can create a more uniform air temperature.
Use blanking panels inside equipment enclosures so that air from hot aisles doesn’t mix with air from cold aisles.
Seal cable outputs to minimize “bypass airflow,” whereby cool air is short cycling back to cooling units instead of circulating evenly throughout the data center. This phenomenon affects as much as 60 percent of the cool-air supply in computer rooms.
Together, these methods could help a data center with 1,000 servers save up to $109,000 annually.
8. Conduct An Energy Audit of your Data Center
IT efficiency measures the useful output of the IT equipment for a given electrical power input. Site infrastructure efficiency shows how much power is diverted into the support systems and how much energy is needed to run them.
This will be useful for tracking efficiency over time and for maximizing output while reducing input power.
Total data-center power is the power required to support the IT equipment, backup power systems, and cooling systems. IT equipment power is the power drawn by all IT equipment in the data center; a practical approximation for it is the output power of the UPSs.
According to the Uptime Institute, a well-designed, well-operated data center should achieve a PUE (Power Usage Effectiveness, the ratio of total data-center power to IT equipment power) between 1.6 and 2.0, a range the Institute derived by applying this calculation to a number of data centers.
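Using the two quantities defined above, the calculation is a single division; the sample readings here are made up for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total data-center power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative readings: 800 kW total facility draw, 500 kW at the UPS output
# (a practical stand-in for IT equipment power, as noted above).
print(f"PUE = {pue(800.0, 500.0):.2f}")  # PUE = 1.60, the low end of Uptime's range
```

A PUE of 1.60 means that for every watt reaching the IT equipment, another 0.60 W goes to cooling, power conversion, and other overhead.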
9. Prioritization to Reduce Power Consumption
After the audit, you can identify and prioritize opportunities for reducing energy consumption.
The lack of a holistic, system-level approach makes it hard for data center managers to make progress, and prioritizing energy-saving opportunities and tailoring best practices is not easy without guidelines. So here’s a short guide on how to prioritize and take action. In an existing facility, start with:
Identifying and powering down underutilized equipment.
Increasing equipment utilization through virtualization and consolidation.
Selecting high-efficiency IT equipment.
Upgrading UPSs to higher-efficiency technology.
Implementing energy-efficient practices for cooling.
Adopting power distribution at 208V/230V.
In a greenfield data center, or in a major expansion/upgrade of an existing data center:
Get executive-level sponsorship and form a cross-functional team to develop an energy strategy for IT operations.
Include energy efficiency as a key requirement in design criteria alongside reliability and uptime.
Consider energy efficiency in calculations of the total cost of ownership when selecting new IT, backup power, and cooling equipment.
Evaluate future cost-saving measures using the initial PUE benchmark. PUE and other server room metrics can be tracked over time to help identify server room issues and opportunities for cost savings.
Automatic PUE Calculations.
PUE values can be calculated automatically from data polled from numerous devices, and a real-time PUE value can be displayed rack by rack or for an entire server room. Power-monitoring sensors collect the IT and non-IT load data used in this calculation.
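The aggregation step might look like the sketch below. The dictionaries of readings stand in for whatever polling API your power-monitoring sensors actually expose; the sensor IDs and values are hypothetical, not AKCP's API:

```python
from typing import Mapping

def realtime_pue(it_loads_kw: Mapping[str, float],
                 non_it_loads_kw: Mapping[str, float]) -> float:
    """Compute a live PUE from the latest polled IT and non-IT load readings."""
    it_kw = sum(it_loads_kw.values())
    if it_kw <= 0:
        raise ValueError("no IT load reported")
    total_kw = it_kw + sum(non_it_loads_kw.values())
    return total_kw / it_kw

# Hypothetical polled readings, keyed by sensor/rack ID:
it_loads = {"rack-01": 4.2, "rack-02": 3.8}                       # kW
non_it_loads = {"crac-01": 3.0, "ups-losses": 0.6, "lighting": 0.4}  # kW
print(f"Live PUE: {realtime_pue(it_loads, non_it_loads):.2f}")  # Live PUE: 1.50
```

In practice this would run on a polling interval, with per-rack PUE computed by attributing each rack its share of the shared non-IT loads.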
Simon Fraser University has constructed a new research data center: a 175-rack facility that houses Cedar, the world’s 50th largest supercomputer. Construction was recently completed with 107 racks fully populated. AKCP sensor solutions were chosen to monitor the data center and the mechanical room, which houses the chilled-water plant equipment.
In the initial phase, spot and rope water sensors were installed in the mechanical room, connected to AKCP sensorProbeX+ SNMP-enabled base units. The spot and rope water sensors are placed at key points where leaks are most likely to occur; rope water sensors are laid out along the lengths of pipes or around the perimeter of an enclosure.