
The Future Of Green

Greenfield data centres represent an opportunity to nail down the best facility design from the get-go, but if any mistakes are made the consequences could be dire. You need to tread very carefully.


March 1, 2011  



The data centre is at the forefront of two of the biggest forces affecting information and communications technology (ICT): the shift to cloud computing, and skyrocketing energy costs.

Added to this is a growing awareness of the environmental impact of ICT, and pressure for data centres to go green. This is a multifaceted phenomenon, with economic, technological, and cultural trends aligning to ensure that tomorrow’s data centres meet the highest environmental standards.

“Many data centres were designed years ago to last decades, but the technology inside of them doesn’t have that lifespan,” says Chris Pratt, strategic initiatives executive, IBM Canada Ltd. “As a result, some of the older concepts around data centre design have put people in a corner.”

This has forced many enterprises to face tough decisions about whether to renovate, rebuild, or move processing to the cloud, where outsourcers can deliver cutting-edge technology.

Not all organizations are ready for a wholesale move to the cloud, and many mid-size enterprises are not prepared to invest in a state-of-the-art greenfield data centre, so it is important to assess legacy options first.

“Not everyone needs to start from the ground up,” says Darin Stahl, lead research analyst at Info-Tech Research. In fact, Stahl says that in the past two years many inefficient data centres have already been thoroughly assessed, with investments having been made to render them more efficient and environmentally responsible.

“There has been a lot of looking at facilities and saying ‘This doesn’t make sense,’ with 2008 having been a big year for technology refreshes,” says Stahl. “In fact, project approval and execution went into 2009 – it didn’t turn off once it was funded.”

However, this means that when it comes to existing data centres, much of the low-hanging fruit has already been picked. Administrators know the value of optimizing airflow by using closed cabinets with rear venting, and much has been addressed with regard to virtualization and storage consolidation, as well as improving power management and accelerating the hardware refresh cycle.

The challenge is that some of these changes, by increasing computing density and power draw, then put added pressure on enterprises to find more ambitious green solutions for the long haul.

Climbing the tree:

Moving up from the so-called low-hanging fruit, and perhaps making the decision to migrate to the cloud, requires a strategic view that includes many stakeholders. Given that power is now a major determinant, the shift to a green data centre – whether proprietary, or as part of an outsourced or co-location agreement – involves input from experts in ICT, facilities, and even policy.

“If there is a change today it is that a growing number of people realize that no one person, or even one company, can build a data centre,” says John Bakowski, a career telecom professional and former president of BICSI. “There are things to consider now that we never would have dreamed of 10 years ago.”

These new concerns have a direct effect on cabling systems, just as newer approaches to cabling can have repercussions for other systems.

“Cabling is a significant factor,” says Bakowski. “More and more people realize that it has to be harmonized as one part of a larger puzzle, because more fibre is going into data centres, with bigger networks. There is a dramatic increase in fibre cabling and pre-terminated copper.”

Where a data centre used to have big boxes with a few cables, it now has smaller boxes drawing greater power and a lot more cables. Often tied in with this is a converged network strategy that leverages 10 Gigabit Ethernet, which is easier to manage and carries lower capital costs than Fibre Channel. Suddenly, the increased physical presence of cables poses a problem.

“Cables tend to build huge dams for air flow,” says Bakowski. “I have seen some very creative ways to address airflow that can deliver a huge amount in power savings, but you need a professional engineer to oversee the whole project.”

Whether the cables are overhead or in a subfloor, the focus needs to be on how the configuration helps meet the overall objective of a green data centre.

Central to this is an understanding that the facility needs to be not only environmentally sound but also able to get the job done. This means using advanced cabling and power options to increase choice from a computing perspective.

“One newer development is a bus duct solution,” says Joe Oreskovic, strategic accounts manager for Eaton Power Quality in Toronto. “Rather than run individual circuits from a panel board, with each rack having a couple of circuits, on top of the racks there can be a bus running the length of the row – you might have a 400 amp or a 600 amp bus, and then pull 30 or 60 amps as needed.”
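As a rough, back-of-the-envelope illustration of the headroom involved – assuming a 600-volt, three-phase bus and a 0.9 power factor, figures not given in the article – a 400 amp bus works out to roughly:

$$P = \sqrt{3} \times V \times I \times \mathrm{PF} \approx 1.73 \times 600\ \mathrm{V} \times 400\ \mathrm{A} \times 0.9 \approx 374\ \mathrm{kW}$$

On the same assumptions, a single 60 amp tap box serves roughly 56 kW, so one overhead bus can feed an entire row of high-density racks, and capacity can be added by swapping tap boxes rather than pulling new circuits.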

Putting the power overhead improves availability and ease of use, which makes it simpler for IT to meet business objectives.

In some cases this reconfiguration can be part of a data centre conversion, but often it results in people seriously considering going with entirely new infrastructure.

“It is the sheer need for power that is forcing people to build greenfields,” says Oreskovic. “With the new configuration, if someone needs 10 times more power, all they have to do is change the tap box.”

It is true that within a legacy environment modest renovations can optimize airflow into hot/cold aisles, and innovations like flywheel-based UPS systems can deliver a defensible ROI.

However, once the discussion turns to a complete revamp of the power supply, and options like liquid cooling, it is time to consider a brand-new greenfield facility.

Greenfields forever:

According to The Uptime Institute, a professional services and standards organization that focuses on data centres, operational sustainability requirements increase in direct correlation to construction investments.

The Institute defines four tiers of data centres, with the criteria stressing power redundancy as the main route to high availability. At the top tier all cooling equipment is independent and dual powered – and to get there many experts think liquid cooling is a must, because the power draws are going through the roof.

“Ten years ago we had racks with 5 kilowatts of power consumption,” says Pratt, “and people said that if you put 15 to 20 kilowatts on a rack it would spontaneously combust. Now, with some of the new technology, there are installations in Canada with well-utilized, well-virtualized racks able to draw 75 kilowatts.”

Increased power draw does not necessarily represent inefficiency; it can simply reflect the impressive computing capability that virtualized servers deliver. The simple fact of the matter, however, is that for this level of sophistication to be environmentally responsible, new facilities are often required.

“We are seeing an enormous amount of additional workload with these more efficient units,” says Pratt. “And with this increase we have to allow for weight density, with floor loading then becoming an issue. As well, when you get to 75 kilowatts – there is even discussion of 150 kilowatts – then a data centre simply has to be liquid cooled.”
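To put those numbers in perspective, here is a back-of-the-envelope airflow calculation that does not appear in the article, using the standard sensible-heat approximation and an assumed 20°F (about 11°C) temperature rise across the rack:

$$\mathrm{CFM} \approx \frac{3.16 \times P_{\mathrm{watts}}}{\Delta T_{\mathrm{°F}}} \quad\Rightarrow\quad 5\ \mathrm{kW} \approx 790\ \mathrm{CFM}, \qquad 75\ \mathrm{kW} \approx 11{,}850\ \mathrm{CFM}$$

Moving nearly 12,000 cubic feet of chilled air per minute through a single cabinet is impractical, which is why the conversation shifts to liquid cooling at these densities.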

Liquid cooling can be introduced to just about any data centre, but much like power supply, it makes more sense to have a centralized delivery system, which usually requires a design from the ground up. According to Stahl at Info-Tech, two exciting and complementary data centre technologies are earth tubes, which are usually air-based, and radiating tubes for liquid cooling.

“Earth tubes are pipes that are put in channels deep into the earth,” he says. “In the summer the air can be cooled this way, and in the winter it can be pre-warmed before bringing it back into the HVAC system.”

Radiating tubes, by comparison, are embedded in the building walls. From there the liquid coolant, a mixture of glycol and water, runs through the tubes to help cool the data centre. “From a facilities perspective, you can also take advantage of flat roofs, and use solar to help get through brown-outs during peak power periods,” says Stahl, who emphasizes that all of this can tie into a larger vision of building automation.

The data centre focus, for which The Uptime Institute’s four tiers are a good guide, can be paired with environmental certifications such as Energy Star for computers and servers, and LEED for facilities.

A data centre is a building, too:

Leadership in Energy and Environmental Design, or “LEED”, is an increasingly popular green building rating system that functions as a third-party certification program for the design, construction and operation of what it calls “high performance green buildings”.

“Every LEED project is a standalone,” says Braden Kurczak, division head for green buildings at Enermodal Engineering in Kitchener, Ont. “You can’t really apply the identical strategies to all buildings, though some approaches make sense for all.”

One home-grown example is Bell Canada, which will have a LEED Gold-certified data centre up and running in Ottawa by late 2012. According to the company, that will help put it in the top 2% of North American data centres for effective power use. But LEED is only one part of the puzzle.
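The article does not say which metric underpins that claim, but the usual yardstick for “effective power use” in data centres is power usage effectiveness (PUE): the ratio of total facility energy to the energy that actually reaches the IT equipment.

$$\mathrm{PUE} = \frac{E_{\mathrm{total\ facility}}}{E_{\mathrm{IT\ equipment}}}$$

As an illustrative example – not a Bell Canada figure – a facility that draws 1.2 MW in total to support 1 MW of IT load has a PUE of 1.2, while a legacy room drawing 2 MW for the same load sits at 2.0, with the difference consumed by cooling, power conversion, and lighting.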

“The LEED rating system will not deal with the actual operations of the data centre,” cautions Kurczak who, like Bakowski, emphasizes the importance of a holistic view. “Data centres have different energy footprints. To build energy savings we need to know what is driving demand hour by hour, and in a data centre that means taking into account issues like virtualization and free cooling.”

For Kurczak, a greenfield data centre represents an opportunity to nail down the best facility design from the get-go, but it also carries the risk of mistakes with permanent consequences.

“Given that there is no such thing as a standard practice, and you are only going to design the building once, you have to think in terms of value engineering,” says Kurczak. “What costs are being cut, what are the embodied energy components, what are the agreed-upon metrics for a kilogram of concrete, a ton of steel?”

Some energy-saving building practices can at times appear extreme, even petty, but they simply prove the point that in order to be green and to run as efficiently as possible, today’s data centres have to take everything into account.

“In looking for ways to reduce friction from airflow under a subfloor or around a rack it can come down to what paint you are using,” says IBM’s Pratt. “More people are focusing on incremental changes from both a cost and a green perspective.”

Finding your community:

The message that comes through loud and clear when assessing the merits of a green data centre, whether in the context of a renovation, a greenfield build, or choosing a third-party provider, is that one person cannot do it all. There are simply too many factors, and too many intersecting areas of expertise. Fortunately, a community of interest is easy to find.

“One thing that is unique to our industry, and that I like, is the extent to which people network,” says Bakowski. “There are plenty of opportunities to share information.”

The list of influencers can be long. First, there are the power supply people, like Joe Oreskovic from Eaton. Then there are people with specific HVAC expertise, consultants of every stripe, and of course the technology providers. The providers are a great resource, though admittedly it can be hard to get the likes of Cisco, HP, Dell, Foundry and IBM in the same room. Patience is a must.

“I know all the hardware guys and I am sometimes breathless at what the technology can accomplish,” says Bakowski. “They know their stuff, and they also know that if they are running at 20% capacity they are doing quite well. They want to deploy on a much higher usage basis, but they need to address peaks and valleys.”

Now, the new frontier in the data centre is all about data management, monitoring, and analysis. Data centres, no longer single tenant monoliths, have racks with varying densities and power demands.

Building automation can support modular approaches to power management and flexible UPS systems. In this world, going green is a group effort.
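As a minimal sketch of what rack-level monitoring and analysis can look like – the rack names, capacities, and readings below are hypothetical, not drawn from the article or any particular vendor’s tooling – the basic idea is to compare each rack’s measured draw against its provisioned capacity and flag the outliers:

```python
# Hypothetical rack-level power monitoring sketch (illustrative only).
# Each rack has a provisioned capacity in kW and a list of recent readings in kW.
racks = {
    "row1-rack03": {"capacity_kw": 10, "readings_kw": [4.2, 4.5, 4.1, 4.8]},
    "row2-rack11": {"capacity_kw": 30, "readings_kw": [22.0, 24.5, 27.9, 28.3]},
    "row4-rack02": {"capacity_kw": 75, "readings_kw": [51.0, 63.2, 70.4, 72.8]},
}

ALERT_THRESHOLD = 0.90  # flag racks peaking above 90% of provisioned power

for name, rack in racks.items():
    peak = max(rack["readings_kw"])
    average = sum(rack["readings_kw"]) / len(rack["readings_kw"])
    utilization = peak / rack["capacity_kw"]
    status = "ALERT" if utilization > ALERT_THRESHOLD else "ok"
    print(f"{name}: avg {average:.1f} kW, peak {peak:.1f} kW, "
          f"{utilization:.0%} of capacity [{status}]")
```

Feeding readings like these into a building automation system is what allows power management and UPS capacity to be tuned rack by rack rather than for the room as a whole.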

“You can refresh a six-year-old server and see a benefit – a blind squirrel will still find a nut,” says Stahl. “But if you approach green in an ad hoc way you are always leaving money on the table.”

All the stakeholders then benefit from awareness of the concerns of others. The cabling people should be able to discuss air flow and business agility. The hardware folks need to expand their concerns beyond storage, virtualization, and networking, and be ready for a discussion of power costs and floor stress.

And when thinking of water cooling, HVAC experts might want to consider that in the future they may be handling water that has been used to cool processors at the chip level.

As Stahl points out, “if you hit only low-hanging fruit, the grief index will rise.”

And as data centres draw more and more power, with price volatility and the risk of power shortages making energy efficiency a critical business objective, going green is no longer a nice-to-have – it’s a must-do. CNS

Tim Wilson is a freelance writer based in Peterborough, Ont. He can be reached via e-mail at tim@twilsonassociates.com