Data centre power planning


By Jim Smith
Friday, 07 October, 2011



Demand for data centres continues unabated, driven by factors like cloud computing and increasing use of multi-application phones. These data centres must be efficient if they are to be successful. Jim Smith*, CTO of Digital Realty Trust, discusses how power-usage planning can help build better data centres.

Enterprise-sized companies have an increasing interest in adding data centre capacity. While this rise in demand bodes well for vendors of data centre space, and for companies that specialise in their design and construction, it does come with a caveat. The power demands of these new facilities are daunting, and present a significant ongoing cost for the occupants of each new computing centre.

In 2007, Jonathan Koomey of the Lawrence Berkeley National Laboratory published a report that concluded that data centres account for 2% of all energy consumed in the USA - enough to power every TV in the country.

Yet based on current projections, the near future may make these look like the ‘salad days’ of energy consumption. Gartner estimates that energy costs, which historically made up about 10% of the overall IT budget, could soon exceed 50% of it. Thus a critical issue facing data centre designers is what steps must be taken to deliver the most energy-efficient facility possible.

Understand the requirements and plan upfront

Although data centre requirements will vary between organisations, they are all defined by a common element - their existing power usage. Until very recently, energy utilisation was viewed as a constant or, from a slightly better perspective, an average. Power requirements were typically expressed as follows: “Our data centre requires ‘x’ kW to support our computing equipment 24/7”.

The first step in designing an energy-efficient data centre is to understand the firm’s logical flow of power usage. Logical flow is simply the mapping and understanding of the daily peaks and valleys of each supported business unit’s or application’s power usage. For example, at Digital Realty Trust we have a number of financial trading platforms as customers. Their kW consumption is not a flat line across the day - there are four significant peak periods where usage spikes: first thing in the morning, immediately before and after the lunch hour, and finally, the period just prior to market close.

Understanding the logical flow of power usage allows the development of more accurate projections on actual data centre power requirements. This ensures that the new facility is designed to support the actual patterns of use without over-purchasing power capacity.
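
As a rough illustration of the idea, the sketch below sizes capacity from a daily load profile rather than a flat 24/7 figure. The workload names and hourly figures are hypothetical placeholders, not customer data; the point is simply that workloads peaking at different times give a coincident peak well below the sum of the individual peaks.

```python
# Toy sketch: size power capacity from an observed daily load profile
# rather than a flat 24/7 figure. All names and numbers are hypothetical.

hourly_kw = {
    "trading_platform": [120, 115, 110, 110, 115, 130,   # 00:00-05:00
                         180, 260, 310, 280, 270, 300,   # 06:00-11:00
                         320, 290, 270, 260, 340, 300,   # 12:00-17:00
                         220, 180, 160, 150, 140, 130],  # 18:00-23:00
    "overnight_batch":  [220, 240, 240, 220, 180, 120] + [40] * 12
                        + [60, 80, 120, 160, 200, 220],
}

# Workloads peak at different times, so the coincident facility peak
# is lower than the sum of the individual peaks.
combined = [sum(loads) for loads in zip(*hourly_kw.values())]

sum_of_peaks = sum(max(profile) for profile in hourly_kw.values())
facility_peak = max(combined)
facility_avg = sum(combined) / len(combined)

print(f"Sum of individual peaks (naive sizing): {sum_of_peaks} kW")
print(f"Coincident facility peak:               {facility_peak} kW")
print(f"Average facility load:                  {facility_avg:.0f} kW")
```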

A data centre engineer once described attempting to design a facility without computational fluid dynamics (CFD) modelling as driving while blindfolded. A key element of the energy efficiency of a data centre is the deployment of the equipment within it, and waiting until the facility is finished to determine the optimum layout for its components is less a leap of faith than a formula for disaster.

By using CFD modelling to examine potential configurations prior to construction, multiple potential layouts can be trialled to determine the most efficient option, while allowing problems associated with inadequate airflow or hot spots to be addressed on a computer screen rather than by trial and error after the facility has been completed.

Stick to basic principles

Often we downplay the underlying fundamentals of data centre design, to the detriment of efficiency. The operational efficiency, and hence the energy efficiency, of a data centre is a function of effective power utilisation and heat removal. Ensuring that a data centre is designed to maximise its heat removal capability requires an unobstructed pathway from the cool air source to the server intakes. This cool air pathway must be coupled with a similar path for the flow of server-generated hot air to the return ducts of the data centre facility’s CRAC/H units.

The overarching goal in developing an energy-efficient data centre is to remove obstacles to effective airflow and cooling capability. Among the elements required to achieve this objective are:

1. Hot aisle/cold aisle

Using a hot aisle/cold aisle configuration that places equipment racks in alternating cold (rack air intake side) and hot (rack heat exhaust side) aisles is an effective way to balance the hot and cold air input and output within a facility. This design allows the hot aisles to act as heat exhausts and the cooling system to supply cold air only to the designated cold aisles.

2. Operating temperature

Operating a data centre at the proper temperature can dramatically decrease power consumption and electricity bills. Contrary to historical belief, the temperature of a data centre shouldn’t be the equivalent of the average meat locker. Current ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) guidelines specify a data centre operating temperature of 72°F (22°C) rather than the traditional 68°F (20°C). Although a difference of four degrees Fahrenheit (two degrees Celsius) seems small, multiplied across the whole facility and a 24/7, 365-day operating environment, the decrease in power usage can deliver cost savings ranging from thousands to tens of thousands of dollars a year.
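
To see how a seemingly small setpoint change adds up over a year, a back-of-envelope sketch follows. The plant load, tariff and per-degree savings factor are assumptions chosen for illustration only - real figures vary widely between facilities.

```python
# Back-of-envelope estimate of savings from raising the supply temperature.
# Every figure below is an assumption for illustration only.

cooling_load_kw = 300            # assumed average cooling plant draw
hours_per_year = 24 * 365
price_per_kwh = 0.20             # assumed electricity tariff ($/kWh)
saving_per_degc = 0.03           # assumed ~3% cooling energy per °C raised
setpoint_increase_degc = 2       # e.g. 20°C -> 22°C

saving_fraction = saving_per_degc * setpoint_increase_degc
annual_saving = cooling_load_kw * hours_per_year * price_per_kwh * saving_fraction
print(f"Estimated annual saving: ${annual_saving:,.0f}")
# 300 kW * 8760 h * $0.20 * 6% ≈ $31,500 - in the "tens of thousands" range
```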

3. Build out incrementally

Units like generators and UPS systems are designed to operate at peak efficiency when running close to their maximum design conditions. Purchasing a larger component on the premise that the data centre will grow into it guarantees that the unit will not operate at its maximum level of efficiency. In one respect it is an understandable error: in many instances a key element of the design criteria is to ensure that some past catastrophic event can never happen again. Building a data centre out incrementally instead ensures that all components are right sized for the space and are using their power most efficiently.
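
The efficiency penalty of ‘growing into’ an oversized unit can be made concrete with a simple part-load model. The efficiency curve below is hypothetical (real curves vary by vendor and topology); the point is that the same 300 kW load sits on a much less efficient part of a 1000 kW unit’s curve than a right-sized unit’s.

```python
# Illustrative part-load efficiency penalty for an oversized UPS.
# The efficiency curve is hypothetical; real curves vary by product.

def ups_efficiency(load_fraction):
    """Assumed efficiency vs load fraction for a double-conversion UPS."""
    points = {0.1: 0.86, 0.25: 0.92, 0.5: 0.945, 0.75: 0.955, 1.0: 0.96}
    # pick the nearest tabulated point (crude, but enough for the comparison)
    return points[min(points, key=lambda f: abs(f - load_fraction))]

it_load_kw = 300
for rating_kw in (400, 1000):          # right-sized vs "grow into it"
    fraction = it_load_kw / rating_kw
    eff = ups_efficiency(fraction)
    losses_kw = it_load_kw / eff - it_load_kw
    print(f"{rating_kw} kW UPS at {fraction:.0%} load: "
          f"~{eff:.1%} efficient, ~{losses_kw:.0f} kW of continuous losses")
```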

While diminishing risk is a key element of the criteria for facility design, it is possible to carry this requirement too far. Designing for a worst-case scenario often results in over-engineered solutions. Along with using components that are oversized, underutilised and actually drive up power needs, these facilities suffer from a law of diminishing returns. For example, a facility designed for a 90% margin at each of five layers of infrastructure ends up with an overall margin of only about 60%!
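
The compounding behind that figure is simple to verify: a 90% margin applied independently at each of five layers multiplies out to roughly 60% overall.

```python
# Five infrastructure layers, each designed to a 90% margin.
per_layer_margin = 0.90
overall = per_layer_margin ** 5
print(f"Overall margin: {overall:.0%}")   # 0.9^5 ≈ 0.59, i.e. roughly 60%
```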

4. Focus on flooring

Proper distribution of the perforated tiles of a raised floor is a simple yet effective way to reduce the heat in a data centre, as well as the load on cooling components. Ensuring that the floor is properly sealed and that perforated tiles are not blocked or covered by equipment increases the overall flow of cool air throughout the data centre. By maximising the airflow within a facility, HVAC components can operate more efficiently without requiring excessive power input.

New technical alternatives

Although designing data centres to maximise their energy efficiency is still largely a matter of basic blocking and tackling, some technical options may be considered during the design process. The first is found in UPS systems. More and more providers are offering these components with an ‘economisation’ mode, in which critical loads essentially run on static bypass. The systems include controls to sense an input power anomaly and quickly switch back to full UPS protection. At present, this technology is still in the early adoption phase, as it trades a slightly higher resiliency risk for lower ongoing operational costs.

Many enterprise users are not yet convinced that the reward outweighs the risk. Further study and documented evidence as to the stability of this mode of UPS operation may ultimately make this a much more attractive energy-efficiency option.
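
The scale of the potential saving can be sketched with assumed efficiencies - say 94% in double conversion versus around 98.5% in economisation mode. Neither figure comes from a specific product; they simply show why operators find the trade-off tempting.

```python
# Rough comparison of UPS losses in normal vs 'economisation' (eco) mode.
# Efficiencies, load and tariff are assumptions for illustration.

it_load_kw = 500
hours_per_year = 24 * 365
price_per_kwh = 0.20

for mode, efficiency in (("double conversion", 0.94), ("economisation", 0.985)):
    losses_kw = it_load_kw / efficiency - it_load_kw
    annual_cost = losses_kw * hours_per_year * price_per_kwh
    print(f"{mode:>17}: ~{losses_kw:.0f} kW of losses, ~${annual_cost:,.0f}/year")
```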

Air-side economisation is also beginning to overcome resistance to its use as an efficient way of reducing data centre energy requirements. Perhaps the most important consideration for this technology is geographic. Well suited to temperate, moist climates, the use of outside air takes advantage of ambient temperatures to limit the need for (or use of) other cooling technologies such as chilled-water systems.

Clearly, some parts of Australia are unlikely to offer the right climate for cooling a facility with outside air. There are also concerns about the potential degrading effects of particulate matter - an obstacle that some firms, particularly in the financial services industry, view as anathema to the level of reliability they strive to achieve.
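
A first-pass feasibility check for a given site can be as simple as counting the hours in a typical year when outside air is cool enough to use directly, as the sketch below does with placeholder weather data and a bare temperature threshold. A real assessment would use local hourly weather records and also screen humidity and particulate levels - exactly the concerns raised above.

```python
import random

# First-pass check of air-side economiser potential for a site.
# Placeholder weather data; a real study would use hourly records for the
# location and also screen humidity and air quality, not just temperature.

random.seed(1)
hourly_temps_c = [18 + 10 * random.random() for _ in range(8760)]  # fake year

supply_limit_c = 24   # assumed maximum usable outside-air temperature
economiser_hours = sum(t <= supply_limit_c for t in hourly_temps_c)

print(f"Hours/year below {supply_limit_c}°C: {economiser_hours} "
      f"({economiser_hours / 8760:.0%} of the year)")
```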

*Jim Smith oversees data centre development, Digital Realty Trust’s efficiency and green strategy, and power procurement and energy management. Over the past four years, Smith and his team have delivered more than 500 MW of UPS capacity across over 60 data centre projects worldwide. He has an MBA from London Business School and was named one of InfoWorld’s top 25 CTOs of 2008.
