
Taming The Energy Beast

Data centres consume more and more power: with rising densities and increased hardware capabilities, cooling has become a critical element in power-saving strategies.


March 1, 2009  



Data centre power and cooling costs are rising at a rapid rate. Dwarfed by new server spending for years, they have claimed an ever-increasing share of data centre budgets since 2000, to the point that last year the money devoted to power and cooling exceeded new server spending for the first time. That trend is not likely to change anytime soon.

Today, the peak cooling demand of an enterprise data centre covering 6,100 square metres is similar to that of a 61,000-square-metre commercial office building. Total annual energy consumption is comparable to that of a 122,000-square-metre commercial office building.

Gartner Group warns energy expenditure will emerge as the second-highest operating cost (behind labour) in 70% of data centre facilities worldwide.

“If they are not fully aware of the problem, data centre managers run the risk of doubling their energy costs between 2005 and 2011,” Gartner analyst Rakesh Kumar wrote in a research note released before the 27th annual Gartner Data Center Conference in Las Vegas last December.

Kumar went on to warn that on the assumption that data centre energy costs will continue to double every five years, “they will have increased 1,600% between 2005 and 2025.” According to Gartner, a conventional data centre devotes 35% to as much as 50% of total electrical energy consumption to cooling, compared to 15% in a best-practice “green” data centre.
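
Kumar's 1,600% figure is simple compound doubling. A minimal sketch of the arithmetic, assuming nothing beyond the 2005 baseline and the five-year doubling period cited above (Python used purely for illustration):

# Energy costs doubling every five years from a 2005 baseline; by 2025
# that is 2**4 = 16 times the 2005 level, the figure Gartner cites.
baseline_year = 2005
doubling_period_years = 5

for year in (2011, 2015, 2020, 2025):
    doublings = (year - baseline_year) / doubling_period_years
    multiple = 2 ** doublings
    print(f"{year}: about {multiple:.1f}x the 2005 cost "
          f"(a {multiple - 1:.0%} increase)")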

At a broad level, the transition from an industrial to digital and now to a services-based economy has heightened the importance of secure and available power.

It is no longer a question of preserving investments in expensive industrial hardware: the digital economy ended up transforming critical business data into electronic form, and electronic data is fundamentally dependent on the supply of power.

With power consumption levels on the increase, IT managers need to plan ahead to ensure they can always get the power they need, and that means using it as efficiently as possible.

“Data centre energy usage has grown two to three times in the last three years,” said Joe Oreskovic, regional sales manager for Eaton Power Quality Company in Toronto, Ont., in a presentation at the recent 2009 BICSI Winter Conference in Orlando, Fla. “Most traditional data centres constructed three to 10 years ago were engineered to accommodate three to five kilowatts per rack, but the new technologies can pack in IT equipment with a power and heat load as high as 30 kilowatts per rack.”

Other factors Oreskovic addressed:

• High-density blade servers and rack mounted storage arrays are being touted as the saviours of today’s space-hungry data centre manager;

• Virtualization of IT applications can increase server utilization from an average of 15% to over 80% (see the sketch after this list);

• Virtualization of disc storage can allow much higher utilization of the attached storage, therefore reducing the number of discs required;

• Data centre consolidation can reduce operations costs, freeing up capital for the acquisition of more hardware, which in turn adds to power requirements.
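
As a rough illustration of the virtualization numbers above, the sketch below estimates how many hosts remain after consolidating lightly loaded servers onto busier ones. The server count and per-server wattage are assumptions chosen for illustration; only the 15% and 80% utilization figures come from the list above.

# Rough consolidation estimate: workloads running at ~15% utilization are
# packed onto hosts run at ~80%. Counts and wattages are illustrative only.
import math

physical_servers = 100          # assumed legacy server count
utilization_before = 0.15       # average utilization cited above
utilization_after = 0.80        # virtualized target cited above
watts_per_server = 400          # assumed average draw per server

useful_capacity = physical_servers * utilization_before
hosts_after = math.ceil(useful_capacity / utilization_after)

power_before_kw = physical_servers * watts_per_server / 1000
power_after_kw = hosts_after * watts_per_server / 1000

print(f"Hosts needed after virtualization: {hosts_after}")
print(f"Estimated IT load: {power_before_kw:.1f} kW -> {power_after_kw:.1f} kW")
# The surviving hosts run hotter and are packed more densely, which is
# exactly the per-rack heat problem described below.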

“There is a massive movement toward consolidation and virtualization, which brings its own challenges in terms of energy footprint,” says Sreeram Krishnamachari, worldwide director of Green Networking Initiatives for HP. Data centre managers are not in a position to acquire extra space, especially in today’s economy, he says, but at the same time computational capacity requirements continue to rise.

The result: increased density of networking hardware, servers, storage and cabling, and heavier power consumption per square metre.

Resiliency used to be the number one objective in designing data centres, says Bernard Oegema, Data Centre Consultant with IBM Canada Ltd. “Today, efficiency is becoming the prime consideration, or at least equal with resiliency, thanks to the high density of equipment.”

Oegema says that 20 to 40 kW per rack is more common in enterprise data centres, a figure that goes as high as 85 kW per rack in the supercomputing installations IBM has worked on.
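
To put those densities in perspective, here is a minimal back-of-the-envelope sketch of the cooling load they imply. The row size is an assumption; the per-rack figures are the three-to-five kW legacy design point and the 30 kW high-density figure quoted earlier.

# Essentially all power drawn by IT equipment becomes heat, so a row's
# cooling requirement tracks its power density almost one-to-one.
racks_per_row = 10                              # assumed row size
scenarios = {"legacy": 4, "high-density": 30}   # kW per rack, cited earlier

for label, kw_per_rack in scenarios.items():
    row_heat_kw = racks_per_row * kw_per_rack
    tons = row_heat_kw / 3.517   # 1 ton of refrigeration ~ 3.517 kW of heat
    print(f"{label}: {row_heat_kw} kW of heat per row, "
          f"about {tons:.0f} tons of cooling")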

As a result of the amount of heat produced at power densities like these, more efficient responses like liquid cooling, once considered ‘exotic,’ are becoming more common. IBM Canada’s recent deployment of a modular data centre for golf and corporate clothing provider Ash City Worldwide Inc. in Richmond Hill, Ont. provides a case in point.

IBM installed ‘close-coupled in-row’ liquid cooling, where the cooling is placed as close as possible to the heat source, although not actually in contact as is often the case with mainframes and supercomputers.

“We ran the cabling on top of the racks, since there is no raised floor,” Oegema says. “Often when a raised floor is used for cooling, putting the cabling under the floor damps the air flow, reducing the effectiveness of cooling and making it less efficient.” Bad airflow management can increase costs by as much as 15 to 20%, Oegema says. What is more, the racks can overheat, the air conditioning has to work harder, and the lifetime of the equipment is reduced.

Unfortunately, the need for more cabling goes hand-in-hand with the evolving data centre. “More and more servers can be stacked in the cabinets, therefore, more network ports need to be installed to connect the servers,” says Benoit Chevarie, product line manager for Belden. “With sometimes up to 100 connections per server cabinet, it’s very important that the patch panels and patch cords be very dense and that the EDA sub-system be very well organized and maintained to avoid interference with the server cooling system. Inefficiency in the cooling system will have a direct impact on the power consumption in the data centre.”

The cooling equation

According to Tarun Bhasin, server market analyst at IDC Canada in Toronto, the proliferation of multicore and blade servers is playing a huge role in elevating the importance — and the financial draw — of power and cooling.

Virtualization is now a widespread phenomenon, saving costs by reducing server count, enabling consolidation of data centres, increasing availability of critical systems and applications while making disaster recovery easier and faster.

At the same time, virtualization also increases server utilization rates, which intensifies power consumption. The increase is not nearly enough to offset the savings virtualization brings in other areas, but it is enough to ensure that power usage and cooling efficiency have to be taken into account when planning the layout of a new data centre or upgrading/redesigning an existing one.

“We see the energy costs of powering and cooling the equipment continue to increase quickly, and it becomes very critical to increase the airflow in the data centre,” says Charles Newcomb, product manager at Panduit Corp. in Tinley Park, Ill. “Typically you will want a hot aisle/cold aisle layout, but you also need to control where the hot air is going and how it is handled.”

A hot aisle/cold aisle layout can cut equipment cooling costs by as much as 40%. One way of controlling airflow, as Newcomb suggests, is to set up a contained system that directs the heated air back to the computer room air conditioner (CRAC) unit.
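
Linking that 40% figure to the Gartner share-of-energy numbers cited earlier gives a sense of the facility-level stakes. A minimal sketch; the annual energy spend is an assumed round number, and the cooling share is taken from the 35-to-50% range above:

# What a 40% cut in cooling cost can mean for the whole facility.
annual_energy_cost = 1_000_000   # assumed facility energy spend, dollars/year
cooling_share = 0.40             # within the 35-50% range cited earlier
containment_savings = 0.40       # hot aisle/cold aisle upper bound cited above

saving = annual_energy_cost * cooling_share * containment_savings
print(f"Facility-level saving: ${saving:,.0f} per year "
      f"({cooling_share * containment_savings:.0%} of total energy spend)")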

A side-to-side airflow layout is optimal for switches and routers because it allows for the greatest port density: switches designed around front-to-back airflow can’t be deployed in the same densities because added spacing is required to let the cool air in at the bottom of the switch, draw it upward through the chassis and exhaust it out the back.

As power and cooling systems manufacturer APC states in its white paper, Cooling Options for Rack Equipment with Side-to-Side Airflow: “This presents a problem with today’s trend of converged data/voice/video networks. In the past, telephone systems were located separately in small secure rooms, but with the advent of convergence, data, voice and video equipment are being collocated using standardized rack enclosures.” Another trend driving convergence is Storage Area Networks (SANs), where storage equipment is being utilized with switching devices such as routers. As these trends gain momentum, IT managers find it necessary to combine side-to-side airflow equipment with the traditional front-to-back airflow equipment.

To achieve maximum port density, side-to-side airflow cooling is employed by many switch and router manufacturers. Panduit provides ducting with its cabinets that manages the airflow for the switch based on a hot aisle/cold aisle layout.

Products include a new ducting system that optimizes airflow for top-of-rack switches, which are used in server cabinets where the heat load is exceptionally high. The duct draws air from the cold aisle and keeps the switch in the desired temperature range. Installing blanking panels prevents air from flowing through unused rack spaces, which otherwise allows heated air to mix with cooler air before it reaches the equipment and reduces the efficiency of cooling.

Outside the box

Although servers and storage are the prime heat generators in the data centre, networking equipment and cabling can have a greater influence on power use than is generally recognized. Chevarie echoes Oegema’s warning about the effects of lots of cabling under a raised floor, adding that network performance can be degraded by transmission impairment in the cables or by having too many connections in close proximity.

Chevarie believes that an optimized cabling infrastructure can have a more subtle, but highly positive impact on data centre efficiency. He offers some considerations to help optimize the design:

• Current and future requirements: the cabling infrastructure will survive two or three generations of equipment;

• Planned network growth: a fast growing network will likely take advantage of a modular cabling infrastructure design where switches are distributed in every row of cabinets (end-of-row);

• Frequency of moves, adds and changes: a data centre that sees a lot of movement in server cabinets will benefit from the implementation of Zone Distribution Areas (ZDA), which keeps the network connections outside the server cabinets;

• Cooling type: a data centre using in-row cooling will likely use 100% of the mounting rails for servers in order to optimize the return on their investment;

• Cost per square foot: if it is high, managers will want to use high-density connectivity systems to save space. In these cases, wall cross-connects would be a good option.

Network cabling can have an indirect impact on data centre power consumption by offering additional bandwidth to enable server virtualization and consolidation. Power distribution and thermal management solutions of the kind that Belden and other vendors manufacture can play a more direct role in reducing power consumption when power distribution is augmented with remote monitoring capabilities, while heat containment solutions can improve the efficiency of cooling.

It is also important to note that data centres will never be served by a single vendor, Krishnamachari says. “The networking equipment will come from one vendor, the servers from another, and so on. So it’s important that all entities can talk to each other in a meaningful way; if they can do this, you can share information around metrics like utilization rates. This enables you to make sure that power is not over-provisioned. For example, a security camera that needs 8 watts to operate doesn’t get any more than that.”
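
A minimal sketch of the kind of power-budget check Krishnamachari is describing, where each endpoint is allocated only the power it reports needing rather than a blanket per-port maximum. The device names, wattages and switch budget are illustrative assumptions; only the 8-watt camera comes from the quote above.

# Toy Power over Ethernet budget: allocate each endpoint what it requests
# and track headroom against the switch's total PoE budget.
switch_poe_budget_w = 370.0      # assumed budget for an illustrative switch

requests_w = {                   # per-endpoint draw, e.g. as reported by the device
    "security-camera-1": 8.0,    # the 8 W camera from the article
    "ip-phone-1": 6.5,
    "wireless-ap-1": 15.4,
}

allocated = sum(requests_w.values())
headroom = switch_poe_budget_w - allocated
for device, watts in requests_w.items():
    print(f"{device}: allocated {watts:.1f} W, no more")
print(f"Total allocated: {allocated:.1f} W of {switch_poe_budget_w:.1f} W "
      f"({headroom:.1f} W headroom)")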

Krishnamachari says that broad industry standards are essential to enable this kind of interoperability. One example he cites is the Link Layer Discovery Protocol (LLDP), a vendor-neutral Layer 2 protocol that allows a network device to publish its identity and capabilities to the local network.

LLDP-Media Endpoint Discovery (LLDP-MED) is an LLDP enhancement that, among other things, allows for extended and automated power management of Power over Ethernet endpoints. “LLDP-MED lets us adjust the power usage of an endpoint device in very granular increments, such as 0.1 watt,” Krishnamachari says.
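
The 0.1-watt granularity reflects how LLDP-MED’s extended power TLV carries power as an integer number of tenths of a watt in a 16-bit field. The sketch below models only that power-value field, not the full TLV, and the helper functions are hypothetical names of my own:

# LLDP-MED (ANSI/TIA-1057) extended Power-via-MDI encodes the power value
# as a 16-bit integer in units of 0.1 W, hence the fine-grained control.

def encode_power_value(watts: float) -> bytes:
    """Encode a power request/allocation as tenths of a watt (2 octets)."""
    tenths = round(watts * 10)
    if not 0 <= tenths <= 0xFFFF:
        raise ValueError("power value out of range for a 16-bit field")
    return tenths.to_bytes(2, "big")

def decode_power_value(raw: bytes) -> float:
    """Decode the 2-octet power value back into watts."""
    return int.from_bytes(raw, "big") / 10

# The 8 W security camera from the article, trimmed in 0.1 W steps:
for watts in (8.0, 7.9, 7.8):
    wire = encode_power_value(watts)
    print(f"{watts:.1f} W -> 0x{wire.hex()} -> {decode_power_value(wire):.1f} W")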

“The drive for power efficiency is having a huge impact on data centres today, and it is shaping what data centres will look like in future,” says IDC’s Bhasin. “Considering that the average lifespan of a data centre is 15 to 20 years, planners and managers will have to factor ‘green’ principles into what they do now. Data centres now are running out of power before they run out of space.”


Andrew Brooks is a Toronto-based freelance technology writer. He can be reached at ahbrooks@rogers.com.
