Cutting data centre energy costs
Despite growing demands on data centres and rapidly rising energy prices, it is possible to limit the impact of power costs.
With CIOs under increasing pressure to be more energy efficient, there are several core areas where gains can be found.
1. Consider location, delivery and management models
Cloud computing opens new opportunities for organisations striving to improve energy efficiency. The next-generation data centre is a place where the multiple services that support the business are available as they’re required. Not only do these data centres operate at a higher level of energy efficiency; further savings are realised when cloud providers allow you to adjust capacity and pay only for actual usage.
After establishing the services you want and how you will procure and consume data centre services, the next step is to optimise delivery of applications over the network. This can reduce the number of physical data centres you need to own and operate. Then there’s location. Being able to co-locate your ‘in-house’ IT infrastructure in the same data centre (or at least in the same area) as IT services consumed from third parties can significantly reduce requirements for your network layer. Additionally, look for data centres that can demonstrate alternative power generation/cooling technologies, such as free air cooling (more on that below) and green technologies.
Temperature is another opportunity. Improvements in the operating-temperature tolerance of IT infrastructure, coupled with advances in data centre cooling, mean it’s possible to run data centres a few degrees warmer. Raising the temperature set point by a few degrees can translate into cooling savings upwards of 10 per cent.
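As a back-of-envelope sketch of what that saving means at facility scale, assume a 500 kW facility in which mechanical cooling accounts for about 35 per cent of the load (both figures are illustrative assumptions, not measurements from any particular site):

```python
# All inputs are illustrative assumptions, not measured figures.
facility_kw = 500.0      # assumed total facility power draw
cooling_share = 0.35     # assumed share of power consumed by mechanical cooling
cooling_saving = 0.10    # ~10% cooling saving from a warmer set point (per the text)

saved_kw = facility_kw * cooling_share * cooling_saving
print(f"{saved_kw:.1f} kW saved")  # 17.5 kW saved
```

Even a modest set-point change compounds into a meaningful, continuous saving when cooling is a large slice of the facility's draw.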
2. Virtualise and consolidate
Many servers utilise only five to 15 per cent of their capacity. Often these devices can be consolidated, creating a more environmentally sustainable data centre environment. Virtualisation encapsulates computing resources and runs them on shared physical infrastructure in such a way that each appears to exist in its own separate physical environment. The benefits can be substantial: improved application availability and business continuity, independent of hardware and operating systems.
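The consolidation arithmetic can be sketched simply. The server counts and utilisation targets below are illustrative assumptions, not sizing recommendations:

```python
import math

def hosts_after_consolidation(n_servers: int,
                              avg_utilisation: float,
                              target_utilisation: float) -> int:
    """Pack the aggregate load of n_servers (running at avg_utilisation)
    onto virtualised hosts driven at target_utilisation.
    Assumes the same hardware class before and after."""
    total_load = n_servers * avg_utilisation
    return math.ceil(total_load / target_utilisation)

# 100 servers idling at 10% could, in principle, fit on 17 hosts at 60%
print(hosts_after_consolidation(100, 0.10, 0.60))  # 17
```

In practice the ratio is tempered by memory, I/O and redundancy requirements, but the headroom implied by five-to-15-per-cent utilisation is the reason consolidation pays off.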
3. Design a best-practice floor plan
Some examples of accepted best practices in data centre floor plan designs include:
Hot aisle/cold aisle layout: Using this layout, equipment is spared from having hot air recirculated, reducing the risk of an outage through device failure. Also, a common hot aisle provides the ability to contain areas where heat density is high - such as racks with blade servers - and to deal with the heat in a specific manner.
Free air cooling: While the benefits derived from air-side economisers depend greatly on where your data centre is located, the energy savings can be significant. Mechanical cooling, depending on the source, is estimated to consume anywhere from 33 to 40 per cent of a facility's incoming electricity. Designed to supplement or bypass this process, air-side economisers can bring Mother Nature into the data centre whenever ambient conditions are favourable.
Outside air is brought in and distributed via a series of dampers and fans. IT infrastructure ingests the cool air, transfers heat, and expels hot air to the room. Instead of being recirculated and cooled, the exhaust is simply directed outside. If the outside air is particularly cold, the economiser may mix the inlet and exhaust air, ensuring that the resulting air temperature falls within the desired range for the equipment.
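A minimal sketch of that mixing logic, assuming a simple linear blend of outside and exhaust air and an illustrative 22°C supply target (real economiser controls also account for humidity and air quality):

```python
def supply_air_mix(outside_c: float, exhaust_c: float,
                   target_c: float = 22.0) -> float:
    """Return the fraction of outside air (0..1) in the supply mix so
    the blended temperature approximates target_c."""
    if outside_c >= target_c:
        return 1.0  # outside air needs no pre-warming; favourable conditions assumed
    if exhaust_c <= target_c:
        return 1.0  # exhaust is too cool to warm the mix; use outside air only
    # Linear blend: f * outside_c + (1 - f) * exhaust_c == target_c
    f = (exhaust_c - target_c) / (exhaust_c - outside_c)
    return max(0.0, min(1.0, f))

f = supply_air_mix(-10.0, 35.0)  # cold day: blend -10°C intake with 35°C exhaust
print(round(f, 2))               # 0.29
```

Here roughly 29 per cent outside air blended with the warm exhaust yields the 22°C supply, so no mechanical cooling runs at all.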
Distribution of power across racks: Where possible, balance the watts per rack to within a 10-15 per cent variance. This minimises hotspots and the need for sporadic hot-aisle containment. Often, data centre designers place servers performing related functions together, but the benefit is counteracted by the heat density this may cause.
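The balance guideline above can be checked mechanically. The rack figures and tolerance here are made up for the sketch:

```python
def unbalanced_racks(watts_per_rack: dict[str, float],
                     tolerance: float = 0.15) -> list[str]:
    """Return racks whose draw deviates from the mean by more than tolerance."""
    mean = sum(watts_per_rack.values()) / len(watts_per_rack)
    return [rack for rack, watts in watts_per_rack.items()
            if abs(watts - mean) / mean > tolerance]

racks = {"A1": 4800, "A2": 5100, "A3": 5000, "A4": 7200}  # A4: blade-heavy (assumed)
print(unbalanced_racks(racks))  # ['A4'] - a candidate for redistribution
```

Running a check like this against metered PDU data highlights the hotspots before they force ad hoc containment measures.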
Minimise underfloor cabling: It’s imperative for organisations using static-pressure underfloor cooling to minimise or eliminate underfloor cabling. Where cabling must run under the floor, use conduit, cable trays, and other structured methods. This minimises barriers between CRAC units and perforated tiles, resulting in freer airflow and a more efficient cooling system.
4. Redesign the data centre network
Networking can contribute significantly to energy savings: the deployment of specialist data centre network hardware offers significant benefits over general-purpose network hardware. For example:
- front-to-back airflow to support hot/cold aisle layouts
- higher-efficiency power supplies that dramatically reduce power consumption per port
- convergence functionality to enable the consolidation of multiple devices into a single appliance, which in turn reduces the number of cable runs and improves airflow through the entire data centre
5. Appropriate technology
Product evaluation can no longer be just a price-versus-performance comparison. It’s important to incorporate the total cost of the data centre environment into the calculation, including energy consumption. Look for vendors that have power and cooling at the forefront of their research and development strategies. Select equipment based on life-cycle costs.
6. Information life-cycle management (ILM)
ILM is the application of rigour to the often chaotic and unstructured data stores an organisation maintains. Tiered storage lies at the heart of an ILM implementation. The most important data, or the most performance-critical data, should be placed on the highest-performance and most expensive storage. Take advantage of low-speed and lower energy-consuming devices whenever they can meet the service requirements.
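A minimal sketch of such a placement policy, with tier names and latency thresholds that are purely illustrative assumptions:

```python
# Tiers ordered cheapest (and lowest-energy) first; the latency each
# tier can deliver is an illustrative assumption.
TIERS = [
    ("archive",   24 * 3600),  # tape/offline: hours to access
    ("nearline",  60),         # spun-down disk: around a minute
    ("fast_disk", 1),          # high-speed disk: about a second
    ("ssd",       0.01),       # solid state: ~10 ms
]

def place(required_latency_s: float) -> str:
    """Pick the lowest-energy tier that still meets the service requirement."""
    for tier, delivered_latency in TIERS:
        if delivered_latency <= required_latency_s:
            return tier
    return TIERS[-1][0]  # nothing cheaper will do; use the fastest tier

print(place(120))    # nearline - tolerates minutes, so avoid high-speed disk
print(place(0.005))  # ssd - sub-10 ms requirement
```

The energy saving comes from the routing rule itself: data never lands on a faster, hungrier tier than its service requirement demands.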
7. Investigate liquid cooling
To meet the challenges of blade servers and high-density computing, more organisations are welcoming liquid cooling systems into their infrastructures. Liquid cooling systems use air or liquid heat exchangers to provide effective cooling and to isolate equipment from the existing heating, ventilation, and air-conditioning system. There are a multitude of approaches available - far too many to discuss in detail here.
8. Power-saving technologies
Direct current (DC)-compatible equipment can have a significant impact on power consumption; however, it can be costly to configure, is not widely available, and is also more expensive than equivalent alternating current options.
At present, data centres perform many conversions between alternating current and direct current. This wastes energy, which is emitted as heat and increases the need for cooling. It’s more efficient to power servers directly from a central DC supply. The Lawrence Berkeley National Laboratory in the US estimates that an organisation may save 10-20 per cent of its energy use by moving to direct current technology.
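The quoted range is plausible on a back-of-envelope basis: each conversion stage loses a few per cent, and a central DC supply removes several stages. The stage efficiencies below are assumptions for illustration, not measurements:

```python
from math import prod

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of a series of power-conversion stages."""
    return prod(stage_efficiencies)

# Assumed efficiencies: UPS rectifier/inverter, PDU transformer, server PSU
ac_chain = chain_efficiency([0.96, 0.94, 0.92])
# Central DC plant: one rectification stage plus a DC-DC stage (assumed)
dc_chain = chain_efficiency([0.96, 0.97])

saving = 1 - ac_chain / dc_chain
print(f"Relative energy saving: {saving:.0%}")  # Relative energy saving: 11%
```

The exact figure depends entirely on how many stages your AC chain has and how efficient each one is; the point is that removing conversions, not any single component, is where the saving comes from.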