
The Changing Data Centre

Trade costs have skyrocketed, virtualization is driving up energy consumption and generating more heat, certification requirements are increasingly stringent, permit requirements more prolific, and uptime expectations higher.


July 1, 2012  



Economic uncertainty is doing nothing to stop the onslaught of data that organizations will need to manage, or the investments that service providers are making in top-of-the-line data centres.

“Despite the downturn, organizations are still building and renovating or upgrading their data centres,” says Matt Stansberry, New York-based Uptime Institute’s director of content and publications. “Demand doesn’t stop when the economy slows down.”

A big reason for that is the ever-escalating data boom. In a recent survey, ITBusinessEdge.com found that 28% of businesses expect their data to grow by more than 25% this year. Since it is unlikely any IT department can justify that kind of budget increase, it is easy to understand why there is also a growing movement to infrastructure outsourcing and/or cloud-based services.
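As a rough illustration of why that pace is hard to absorb in-house, the short sketch below simply compounds a 25% annual growth rate over a few years; the starting capacity is an assumed figure, not one from the survey.

```python
# Illustrative arithmetic only: compounding the growth rate cited in the
# survey. The starting capacity is an assumed figure.
growth_rate = 0.25   # 25% per year, from the survey threshold above
capacity_tb = 100.0  # hypothetical starting capacity in terabytes

for year in range(1, 4):
    capacity_tb *= 1 + growth_rate
    print(f"Year {year}: {capacity_tb:.0f} TB")

# Prints roughly 125, 156 and 195 TB -- nearly double in three years,
# which is difficult to fund on a flat infrastructure budget.
```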

There has definitely been an uptick in the demand for outsourcing data centre services, notes Ron Ethier, vice president, data centres and managed services for Primus Telecommunications Canada Inc. in Toronto. “A lot of that is being driven by the fact there are a number of aging data centres in North American enterprise portfolios that are in need of refurbishing or rebuilding.”

This time around, however, organizations that have kept data centre resources under their own wing are finding that getting up to speed is a much different ball game.

Trade costs have skyrocketed, virtualization is driving up energy consumption and generating more heat, certification requirements are increasingly stringent, permit requirements more prolific, and uptime expectations higher.

“There are also a lot more rules and regulations around access and safety, and everything has become so highly specialized, making it difficult to find and retain qualified staff,” Ethier says. “This has made running your own data centre more cumbersome.”

The time factor is also a hurdle of mammoth proportions, he adds. “Facilities aren’t developing as quickly as IT infrastructures. It takes an organization two to three years to complete an upgrade. In the meantime, data demands are escalating, and organizations simply can’t add 25% capacity in 60 days.”

Given all these factors, it is no surprise that more and more organizations are migrating to multi-tenant facilities. But expectations are extremely high indeed, Ethier says. “They have to operate to the highest standards available.”

The certification conundrum: Primus Business Services’ newest project in Markham, Ont., is the ideal case study in what a top-tier data centre should have. This particular facility has been awarded Uptime Institute’s Tier III Certification for Design Documents, making it the only multi-tenant data centre in Canada to achieve the certification for both design and construction.

Tier III certification is one of a veritable catalogue of certifications that today’s data centres need to add to their selling proposition. Table stakes can also include compliance with PCI (payment card industry) and SSAE (Statement on Standards for Attestation Engagements) standards, along with SAS (Statement on Auditing Standards), SOC (Service Organization Control), BICSI (Building Industry Consulting Service International) and ISO (International Organization for Standardization).

For many organizations, certifications are too expensive to get on their own, and the complexities are simply getting beyond in-house capabilities, notes Spencer Rasmussen, director of facility services at Tenzing, an ecommerce and cloud services provider based in Toronto. “Today clients require multi-layered apps, multiple tiers, and strong security offerings. And uptime expectations are pretty much 100% across the board now.”

Mixing the models: Given fluctuations in business cycles, organizations are also looking for more flexible, modular approaches to data centre planning, says Duncan Campbell, vice president, worldwide marketing, converged infrastructure for HP in Cupertino, Calif. “They don’t want to be shackled to old processes.”

Modular containers or data-centre-in-a-box-type systems are increasingly popular add-ons in industries where demand ebbs and flows (e.g. film animation). Many operations are also looking to hybrid models, in which resources are managed by different tiers depending on availability needs.

“Not every data centre has to be at the maximum tier,” Campbell explains. “Today organizations are picking their tiers based on the mission criticality of their applications. You could use a combination of data centres, private cloud and public cloud, depending on your needs. The whole point is how dynamic you can be and how workloads can be moved with different delivery mechanisms.”
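The tiering decision Campbell describes can be thought of as a simple mapping from application criticality to delivery model. The sketch below is illustrative only; the application names, tiers and placement rules are hypothetical.

```python
# A minimal sketch of picking delivery models by application criticality.
# Application names, tiers and placement rules are hypothetical.
PLACEMENT_BY_CRITICALITY = {
    "mission-critical": "Tier III/IV data centre",
    "business-important": "private cloud",
    "best-effort": "public cloud",
}

applications = {
    "payment-processing": "mission-critical",
    "internal-reporting": "business-important",
    "dev-test": "best-effort",
}

for app, criticality in applications.items():
    print(f"{app}: run in {PLACEMENT_BY_CRITICALITY[criticality]}")
```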

The biggest challenge in data centre management today is the fact that there are more moving parts than ever. “Five years ago the right hand wasn’t talking to the left,” says Joe Oreskovic, strategic accounts manager for Eaton Power Quality Company in Toronto. “IT and network individuals were putting in what made sense from a networking perspective, leaving facilities managers struggling with how to support it.”

Now it is a different approach, he says.  “If you are having a rack discussion, power and cooling are not far behind.”

The tide is definitely shifting in more ways than one, says Chris Willis, Americas senior director of cloud for Hitachi Data Systems in Toronto. “In the past, server provisioning and management was the bottleneck. Now it is power. In other words, the bottleneck has moved on to a different group. That has been happening since the mainframe days. Now organizations are just paying someone else to take over responsibility for the complexities so they don’t need to worry about power limitations, rack space or cooling.”

The power factor: In today’s world, the lion’s share of operating expense is energy costs. “Salaries and service contracts pale in comparison,” says Ian Seaton, global technology manager for Chatsworth Products, Inc. (CPI) in Westlake Village, Calif. “Controlling energy costs has become a competitive business initiative and is the key to maintaining margins and business survival.”

According to Oreskovic, in 2004, infrastructure became more expensive than servers. By 2008, energy also exceeded the cost of servers. Much of that is attributable to virtualization and associated cloud topologies, he says. “What we’re ending up with is a trend towards higher per sq. ft. density and higher per rack density.”

While density may be more efficient, it creates issues on several fronts, Oreskovic explains. “Yes, you can get 8,000 times more computing power for about a 30% increase in energy use. But that energy is given off as heat. Then you have to pay money to get rid of it.”
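To put some rough numbers on that trade-off, the sketch below estimates the annual energy bill for a single high-density rack and for removing its heat; the per-rack load, cooling overhead and electricity rate are assumptions, not figures from the article.

```python
# Back-of-the-envelope numbers for the density/heat trade-off described above.
# The per-rack load, cooling overhead and electricity rate are assumptions.
rack_load_kw = 15.0      # IT load per high-density rack (assumed)
cooling_overhead = 0.5   # extra kW of cooling per kW of IT load (assumed)
electricity_rate = 0.10  # dollars per kWh (assumed)
hours_per_year = 8760

it_energy = rack_load_kw * hours_per_year                          # kWh per year
cooling_energy = rack_load_kw * cooling_overhead * hours_per_year  # kWh per year

print(f"IT energy per rack:      {it_energy:,.0f} kWh/yr (${it_energy * electricity_rate:,.0f})")
print(f"Cooling energy per rack: {cooling_energy:,.0f} kWh/yr (${cooling_energy * electricity_rate:,.0f})")
# Nearly every watt the servers draw comes back out as heat, which the
# facility then pays a second time to remove.
```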

He equates the heat generation of one blade server to cranking a stove on high then trying to cool it down. “That’s occurring 100 times over in a data centre. It’s like putting two or three stoves into each rack.”

Cogeco Data Services’ latest data centre expansion in the downtown Toronto market is a rarity for that very reason. “The fact is power is a key issue in the downtown area,” says Tony Ciciretto, president of Cogeco Data Services, Toronto. In Cogeco’s case, the new space will be running on a closed-loop system, which will drive very efficient power utilization numbers.

“Servers processing more data consume more electricity,” confirms Stew Munns, CPI’s national sales manager for Canada in Vaughan, Ont., who estimates that cooling systems account for 30-40% of energy consumption in a data centre.
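That cooling share can be translated into an approximate PUE (power usage effectiveness) figure, as the sketch below shows. Only the 30-40% cooling share comes from Munns; the allowance for power distribution and lighting is an illustrative assumption.

```python
# Translating a cooling share into an approximate PUE (total facility power
# divided by IT power). Only the 30-40% cooling share comes from the article;
# the ~5% allowance for power distribution and lighting is an assumption.
for cooling_share in (0.30, 0.40):
    it_share = 1.0 - cooling_share - 0.05
    pue = 1.0 / it_share
    print(f"Cooling at {cooling_share:.0%} of total -> PUE of about {pue:.2f}")

# Cooling at 30% of total -> PUE of about 1.54
# Cooling at 40% of total -> PUE of about 1.82
```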

The push is on for designs to capture hot exhaust air and channel it directly to cooling systems. Options often considered include hot aisle/cold aisle containment systems, chimney cabinets (aka vertical exhaust ducts) and air dams to prevent air leaks at openings.

“It’s all about air isolation,” Munns notes. “Newer designs no longer exhaust hot air into the room so you can keep things at a homogeneous temperature and your air conditioning runs far more efficiently.”
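A simple mixing calculation shows why that isolation pays off; the supply and exhaust temperatures and the bypass fraction below are assumed values, not measurements from any of the facilities mentioned.

```python
# A simple mixing calculation: when cold supply air bypasses the servers and
# mixes with hot exhaust, the return air reaching the cooling units is cooler
# and less heat is removed per unit of airflow. Temperatures and the bypass
# fraction are assumed values for illustration.
SUPPLY_TEMP_C = 18.0   # cold-aisle supply air (assumed)
EXHAUST_TEMP_C = 35.0  # server exhaust air (assumed)

def return_air_temp(bypass_fraction: float) -> float:
    """Return-air temperature when a fraction of supply air bypasses the IT load."""
    return bypass_fraction * SUPPLY_TEMP_C + (1 - bypass_fraction) * EXHAUST_TEMP_C

print(f"No containment (40% bypass):  {return_air_temp(0.40):.1f} C return air")
print(f"Full containment (0% bypass): {return_air_temp(0.0):.1f} C return air")
# Warmer, unmixed return air lets the cooling plant run at higher set points
# and move less air for the same heat load.
```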

Chasing down the cable runs: Ethier notes that while there have not been any huge changes to the essentials of cable plants, there is a lot going on with what is being put inside cage spaces. “A lot of customers for example are coming in with pre-terminated-type technology and fiber, so when we deliver services to a customer that has 10 cabinets in a cage for example, they may only need two cables.”

There is also a growing focus on keeping cable pathways clean. It is not just about good housekeeping and easier maintenance: cabling can directly impact heating and cooling efficiencies, Munns explains. Overhead network cabling and modular busways, for example, can play a big part in avoiding air flow restriction under floors.

Having been in the service provider side of things for some time, Rasmussen says he always goes with overhead cabling. “We like to keep as little under the floor as possible, to eliminate the risk of damages from air conditioning leaks or condensation.”

An interesting trend he has his eye on is converged fabrics to reduce the density of cabling in racks. “It simplifies everything from management to the cabling installation itself.”

HP’s Virtual Connect capability is another technology designed to address the increasing shift to virtualized environments. By using server and virtual machine connections to simplify storage, server and network management, it reduces cabling and energy costs.

In legacy environments, newer connectors enable data centres to condense equipment components while creating more space, says Jacob Macias, product manager for Cable Wholesale in Livermore, Calif.

Where a wholesale upgrade is not in the cards (or budget), Cable Wholesale has introduced a new line of high-speed 12- and 24-fiber MTP cables. These can bridge legacy 1Gbps/10Gbps networks over to 40Gbps/100Gbps networks, and act as the trunk line on a network backbone, which translates into fewer cabling requirements, lower labour and installation costs, and improved air flow.

“The wonderful thing is that you don’t have to go through an overhaul of your legacy equipment to enable much more data flow,” Macias notes. “You can do your equipment upgrades piece by piece by condensing one section at a time.”
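The fiber arithmetic behind that piece-by-piece approach is straightforward under the common 40GBASE-SR4 assumption that each 40G link runs as four 10G lanes in each direction. The sketch below is illustrative only and does not reflect Cable Wholesale's product specifications.

```python
# Fiber arithmetic for MTP trunking, assuming 40GBASE-SR4: each 40G link is
# four 10G lanes in each direction, or 8 fibers of an MTP connector. Figures
# are illustrative, not Cable Wholesale specifications.
FIBERS_PER_40G_LINK = 8   # 4 transmit + 4 receive lanes
FIBERS_PER_TRUNK = 24     # one 24-fiber MTP trunk cable

def trunks_needed(num_40g_links: int) -> int:
    """24-fiber MTP trunks required to carry a given number of 40G links."""
    total_fibers = num_40g_links * FIBERS_PER_40G_LINK
    return -(-total_fibers // FIBERS_PER_TRUNK)  # ceiling division

for links in (3, 6, 12):
    print(f"{links} x 40G links -> {trunks_needed(links)} x 24-fiber MTP trunk(s)")
# Each 40G port can also break out into four 10G duplex links, which is how
# the same trunk plant bridges legacy 10G gear onto a 40G backbone.
```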

Keeping your eyes on the prize(s): Whether talking servers, networks, mechanical systems or infrastructure management, the real buzz is around all-knowing/all-seeing data centre infrastructure management (DCIM) tools that integrate multiple functions.

“Virtualization of the physical network is causing a lot of complexity and change,” says David Leith, technical product manager for uptime software inc. in Toronto. “Physical wiring matters less and less in these new networks, because you can change things on the fly. This is putting a lot of pressure on maintaining visibility into all changes.”

Increasingly, data centres are moving beyond point tools to single consolidated monitoring tools for all aspects of the facility, from networks and servers to databases and applications, he adds. “It really has to be a complete picture. Monitoring is pretty difficult if you can’t cover off all the silos.”
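The consolidated view Leith describes can be sketched as a set of per-silo checks rolled into one status report. The silo names and check logic below are hypothetical placeholders, not uptime software's product or API.

```python
# A minimal sketch of one consolidated status view across silos. The silo
# names and check logic are hypothetical placeholders, not a vendor API.
from typing import Callable, Dict

def check_network() -> bool:
    return True    # e.g. poll core switches for link state and utilization

def check_servers() -> bool:
    return True    # e.g. query hypervisor APIs for host and VM health

def check_power_cooling() -> bool:
    return False   # e.g. read UPS load and CRAC supply temperatures via SNMP

CHECKS: Dict[str, Callable[[], bool]] = {
    "network": check_network,
    "servers": check_servers,
    "power/cooling": check_power_cooling,
}

for silo, check in CHECKS.items():
    print(f"{silo:15s} {'OK' if check() else 'ALERT'}")
```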

Even shifting power loads are part of the monitoring equation, Oreskovic says. “There used to be a time when applications resided on the server and support was in the form of multiple and redundant power systems and cooling. Racks and zones couldn’t go down. In a virtualized environment, you don’t even know where the load is. As processors move around, so do the power requirements and heat generation.”

Eaton’s Energy Advantage Architecture (EEA) has the ability to change dynamically to maximize efficiency for the load demand. Users can program UPS profiles remotely, depending upon load requirements, Oreskovic explains. “It’s like a hybrid car that provides battery power when you need energy savings, but when you step on the gas you get raw horsepower. With clean power and good conditions it will operate at 95% efficiency.”
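The load-dependent behaviour Oreskovic describes can be sketched as a simple mode-selection rule; the thresholds and mode names below are assumptions for illustration, not Eaton's actual control logic.

```python
# A sketch of load-aware UPS mode selection: run in a high-efficiency
# (eco-style) mode when input power is clean and the load is modest, and
# fall back to full double conversion otherwise. Thresholds and mode names
# are assumptions, not Eaton's actual control logic.
def choose_ups_mode(load_pct: float, input_power_clean: bool) -> str:
    if input_power_clean and load_pct < 80.0:
        return "high-efficiency mode"   # minimal conversion losses
    return "double-conversion mode"     # full conditioning, lower efficiency

for load, clean in [(35.0, True), (35.0, False), (90.0, True)]:
    print(f"load={load:.0f}%, clean input={clean} -> {choose_ups_mode(load, clean)}")
```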

Whether looking at servers, networks or heating/cooling systems, Oreskovic says the key for today’s data centre lies in tackling more than one solution. “There are a lot of little things going on right now and a lot of different ways to skin the cat. No one solution can meet the needs of multiple data centres, so every team needs to do their own assessment and come up with the best solutions for them…or put in systems that are flexible enough to accommodate as things change.”

Denise Deveau is a Toronto-based freelance writer. She can be reached at denise@denised.com.