Feature

Cabling & the Data Centre

Increasing demands mean more and more equipment must be crammed in every year. By necessity, the structured cabling side of the equation is becoming more and more of an exact science.


January 1, 2007  



There are signs of change everywhere in the way data centres are built and refitted, signaling a greater emphasis on planning and on simplifying the installation process.

As an example, when it comes to data centre planning itself, the installation of cable has often been treated almost as an afterthought. However, by paying attention to cabling early in the planning process, it can be installed far more efficiently, reducing installation costs.

Peter Sharp, senior telecommunications consultant at Giffels Associates Ltd. in Toronto and a member of the CNS Editorial Advisory Board, points out that cabling is frequently installed after everything else is already in place; with time at a premium, cable installers rush to pull and terminate cables to all of the equipment.

He had done it that way many times himself, but when a customer told Giffels it simply could not afford the projected cost of cabling a new data centre and offered the firm a significant fee for finding a way to reduce the cabling bill, he sat down and thought of a better way.

Sharp came up with the following proposal: plan the cabling carefully enough that it can be shipped to the site in bundles already cut to the necessary lengths, with connectors already attached, and put it in place before the cabinets are installed, so that all that remains once the hardware arrives is to connect the cables.

He had the 24-strand bundles of fiber colour-coded by including single strands of fiber with different-coloured jackets in every bundle, to identify which bundle went where. The data centre had a pathway system so workers could simply unroll the bundles of cable and lay them in place, and the only work a skilled installer had to do on site was the final connections.
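To illustrate the kind of up-front planning this approach depends on, here is a minimal sketch of the sort of bundle schedule an installer might prepare off-site. The cabinet names, cut lengths and tracer colours are hypothetical examples; only the 24-strand bundles and the colour-coded tracer strands come from Sharp's description.

```python
# Hypothetical pre-termination plan: each 24-strand fiber bundle is cut to
# length off-site and carries a uniquely coloured tracer strand, so crews can
# unroll it into the right spot in the pathway before the cabinets arrive.
bundle_schedule = [
    # (tracer colour, destination cabinet, cut length in metres) -- example values only
    ("red",    "cabinet A1", 18.0),
    ("blue",   "cabinet A2", 21.5),
    ("yellow", "cabinet B1", 27.0),
]

STRANDS_PER_BUNDLE = 24  # bundle size cited in the article

for colour, cabinet, length_m in bundle_schedule:
    print(f"{colour:>6} tracer -> {cabinet}: {STRANDS_PER_BUNDLE} strands, "
          f"cut to {length_m} m, connectors terminated in the shop")
```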

No last-minute panic

This meant much of the tricky work could be done weeks ahead of time, rather than in a last-minute rush, and “back at the shop” rather than on site. Sharp says it made for a smoother and cheaper installation.

In early 2003, the Ontario government set up the Smart Systems for Health Agency to provide data centre services to the province’s hospitals. When the agency began building its data centre, its approach was similar to Sharp’s. All the initial cable runs were done first before anything else went into the data centre, says Linda Weaver, SSHA’s chief technology officer.

Similarly, when SSHA starts a new row of racks in its two steadily expanding data centres, it runs the cabling for the whole row right away, even though not all the racks will be installed and populated at once.

“It’s a lot easier for people who are actually running the cable not having to move around other things,” Weaver says.

Few organizations take it that far, but Mike Barnick, senior manager of solutions marketing at Richardson, Tex.-based cable manufacturer Systimax Solutions, says his company is providing more bundles of pre-terminated cable to customers eager to simplify their installations.

This trend started with harder-to-terminate fiber but is showing up today in copper as well, observes Benoit Chevarie, Montreal-based product line manager at Belden CDT Inc. of St. Louis.

“Definitely we see a trend toward pre-terminated solutions,” he says. “Installers want to come with pre-made bundles of cable …. They want to save time on installation.”

Last April, the Telecommunications Industry Association (TIA) published a standard for the design and installation of data centres, helping promote a shift from the old ad hoc model of cabling, in which installers simply ran cable wherever needed once everything else was in place, to a more structured approach. Known as TIA-942, the new standard addresses a range of issues, from the layout of the facility to fire protection to cabling capacity and network design.

TIA-942 viewed as critical

John Struhar, marketing manager at Ortronics/Legrand, a New London, Conn.-based supplier of cabling and related products, says TIA-942 addresses specific data centre requirements not covered by the existing TIA-568 commercial building wiring standard. It addresses the structure of data centres specifically, dividing them into six spaces: the entrance room, telecommunications room, main distribution area, horizontal distribution area, zone distribution area and equipment distribution area. It recommends Category 6 cabling where twisted-pair is used and 50-micron laser-optimized multimode fiber for optical cabling.
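As a rough illustration of the topology the standard describes, the sketch below models the six spaces and the recommended media as a simple data structure. The one-line glosses on each space are a reading of how TIA-942 is commonly summarized, not text from the standard itself; only the space names and media recommendations come from the paragraph above.

```python
# Simplified model of the six TIA-942 spaces (glosses are rough descriptions,
# not excerpts from the standard) and the media the standard recommends.
tia_942_spaces = {
    "entrance room":                "where outside carrier cabling enters the data centre",
    "telecommunications room":      "serves cabling to offices and support areas outside the computer room",
    "main distribution area":       "central hub of the cabling topology (core switching and routing)",
    "horizontal distribution area": "distribution point serving the equipment rows",
    "zone distribution area":       "optional consolidation point within a row or zone",
    "equipment distribution area":  "the cabinets and racks holding the equipment itself",
}

recommended_media = {
    "twisted pair": "Category 6",
    "optical":      "50-micron laser-optimized multimode fiber",
}

for space, gloss in tia_942_spaces.items():
    print(f"{space}: {gloss}")
print("Recommended media:", recommended_media)
```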

Struhar says there is a need to establish clear standards for cabling data centres. “As we visit data centres,” he says, “we see very different ways of connecting them.” Often equipment vendors send in their own installers who also do the cabling to their equipment, sometimes in questionable ways.

Struhar says it will take time for TIA-942 to have an impact on data-centre cabling practices, but in time he expects it will become as influential as TIA-568 is outside the data centre today.

Ron Ethier, vice president of network and managed services at Primus Telecommunications Canada Inc., which provides hosting services to a variety of customers through its Internet data centres in Ottawa, Toronto and Vancouver, says his company is already doing much of what the TIA-942 standard covers, but he sees value in a comprehensive standard addressing areas such as cable management, floor size and heating, ventilation and air conditioning design.

“I think it brings some structure to data centre cabling,” says Chevarie, “which was not necessarily the case before.” In some areas TIA-942 simply codifies what were emerging as best practices already, he says, but it is helpful to have everything written down in one place with references to other documents as appropriate.

Among the recommendations of TIA-942 is the use of Category 6 cabling. In reality many data centres are still using Category 5e or even older cable, though Chevarie says most new installations his firm sees now use Cat 6.

One thing helping shift people’s thinking toward newer cabling specs is the completion of standards to allow 10-gigabit transmission over copper wiring.

Combining a need for high speeds with connections over fairly short distances, the data centre is the most natural environment for 10-gigabit connections over copper cabling. So proponents of cabling and components that meet the recently completed 10GBASE-T standard and the emerging Augmented Category 6 cable standard see it as the first obvious market for their wares.

“Primarily what are driving it are data centres,” says John Schmidt, senior product manager for structured cabling at ADC Telecommunications Inc. of Minneapolis, one of the pioneering manufacturers of 10-gigabit copper cable. “They will be the first people that begin to deploy 10GBASE-T.”

George Zimmerman, founder and chief technical officer of Irvine, Calif.-based Solarflare Communications, Inc., agrees, saying most of the demand for 10Gbase-T for the next couple of years will be in the data centre.

For existing data centre cabling, 10-gigabit mostly means fiber, because the 10GBASE-T standard was only ratified last spring, and while several vendors offered pre-standard cable capable of carrying 10 gigabits, some purchasers preferred to go with the more mature fiber option.

Also, while the cable has been available, electronics to drive 10 gigabits over that copper cable have not been, so most purchases of high-speed cable so far have been made with future-proofing in mind rather than with the idea of immediately using the cable to its full potential.

“Fiber is already here,” Barnick says. “Switches are here for 10G. So quite obviously the data centre needs that today.” The popular solution for 10-gigabit connections in the data centre up to now has been 50-micron multimode fiber, Barnick says.

Mitch Kahn, vice president of marketing for transport products at Applied Micro Circuits Corp., a Sunnyvale, Calif. company that sells physical layer silicon for multimode fiber, says the 10GBASE-SR standard, which provides a reach of up to about 80 meters using older multimode fiber and as much as 300 meters using newer, higher-quality fiber, is widely used in data centres where high-speed links are required. Today, he says, 10GBASE-T is too expensive and requires too much power.

AMCC is also supporting 10GBASE-LRM, a standard that allows 10-gigabit transmission at distances of up to 220 meters over older 62.5-micron multimode fiber originally designed to support 100-megabit-per-second transmission.

However, Barnick adds, once switching equipment for high-speed copper is available at roughly comparable costs, there will be a tendency to move toward the copper infrastructure.

And copper can serve most high-speed needs in the data centre, Zimmerman says. “Honestly, there aren’t many places other than the long 300-meter kind of vertical links that we see the fiber surpassing the copper.”

The ideal twisted-pair cable for high speeds is Augmented Category 6, an almost-completed standard designed to allow 10-gigabit speeds at distances up to 100 meters.

Installation challenges

The standard is essentially complete, but awaiting formal ratification. Meanwhile, some data centres have installed pre-standard cable designed to meet the Cat 6a specifications, while others believe basic Category 6 or even Category 5e is good enough for the distances they require.

While 10GBASE-T is still finding its feet, Kahn adds, the 10GBASE-CX4 standard for high-speed short-haul transmission over multiple pairs of twinaxial copper cable is a little more established, and some data centres are using it where its 15-meter distance limitation isn’t a problem. The cable is a bit cheaper than fiber, he says, though it is thick and difficult to work with.
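Taken together, the reach figures quoted above suggest a simple decision table for 10-gigabit media in the data centre. The sketch below encodes the approximate distances cited in this article; they are nominal figures, and real-world reach depends on fiber grade, cable quality and installation practice.

```python
# Approximate 10-gigabit reach figures as cited in the article; actual reach
# varies with fiber grade, cable quality and installation practice.
REACH_M = {
    "10GBASE-CX4 (twinax copper)":            15,
    "10GBASE-SR (older multimode fiber)":     80,
    "10GBASE-T (Cat 6a twisted pair)":        100,
    "10GBASE-LRM (62.5-micron multimode)":    220,
    "10GBASE-SR (laser-optimized 50-micron)": 300,
}

def options_for(link_length_m: float) -> list[str]:
    """Return the 10-gigabit options whose nominal reach covers the link."""
    return [name for name, reach in REACH_M.items() if link_length_m <= reach]

if __name__ == "__main__":
    for length in (10, 55, 150):
        print(f"{length} m link: {options_for(length)}")
```

A 10-metre link in the example matches every option including CX4, while a 150-metre run rules out everything except the longer-reach fiber standards, which is the trade-off the vendors quoted here are describing.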

10GBASE-T and Category 6a cabling also present some installation challenges. Since alien crosstalk – interference between cables running side by side – is a problem at higher frequencies, cable manufacturers have made high-speed copper cable thicker, building in air space or added insulation to reduce crosstalk. That makes the cable bulkier and means less of it fits in a given space.

The higher-speed environment is also less forgiving of any slips in cable installation. Here’s where 10-gigabit transmission over copper ties in with the growing interest in pre-bundled, pre-terminated cable. Sharp says doing more of the skilled work in advance in the shop rather than on site under time pressure makes more and more sense as the margin for error is reduced. On the other hand, he notes, the alien crosstalk issue may make bundling cables problematic.

All this assumes, though, that the designers of data centres even see a need for 10-gigabit speeds, and by no means all of them do.

“For the most part the typical customer is still using 100-megabit uplinks,” says Ethier. Primus is seeing no demand from its customers for 10-Gigabit Ethernet yet, so although the company will install fiber runs where necessary, Ethier says it relies largely on copper cabling – and not even the latest standard in copper.

Primus opened a new, $3-million data centre in Vancouver in late September. The facility has many state-of-the-art features, including seismic platforms under the server cabinets to help keep the equipment running smoothly through an earthquake. But as far as cabling is concerned, it’s Category 5e throughout.

That is ample for the 100-megabit connections most customers require, and sufficient for the few one-gigabit requirements.

Primus’s principal business, of course, is hosting Web sites and Internet applications, which limits the throughput its customers need in the data centre itself.

Smart Systems for Health does see a need for 10-gigabit speeds, and even for 100-gigabit using Fibre Channel. However, copper cabling is not playing a significant role in providing that sort of throughput. The SSHA data centres use fiber almost throughout, Weaver says, with some Category 5e cabling for “typically what would be characterized as the last mile” — though in the case of a data centre it’s more like the last few meters.

Increasing demands on data centres not only call for more bandwidth, they also mean more equipment must be crammed in every year to meet escalating needs for processing power and data storage. That, in turn, puts pressure on power and cooling systems, which makes it imperative to plan data centres carefully, right down to routing cable so as to minimize interference with the flow of cool air. By necessity, cabling the data centre is becoming a more and more exact science.

Grant Buckler is a Kingston, Ont. freelance writer who specializes in IT and telecommunications issues. He can be reached at gbuckler@cogeco.ca.

IT’s ‘insatiable appetite for power’ called into question by Gartner

The growth of power-hungry data centres, coupled with the rising cost of electricity, is focusing attention on energy, providing businesses with a double incentive to cut carbon emissions, Gartner Inc. says.

The research firm says organizations are under mounting pressure to develop ‘greener’ approaches towards their information technology (IT) practices, and IT and business leaders need to wake up to the issues of spiraling energy consumption and environmental legislation.

Although technology can help reduce the impact of some environmental problems, its potential harmful effect is receiving increasing attention from environmentalists and policy makers alike.

Gartner says two factors are particularly visible to policymakers: the direct issue of electronic waste and the potential impact — caused by the electricity that computers consume — on global warming.

“IT’s age of innocence is nearing an end,” said Steve Prentice, distinguished analyst and chief of research at Gartner.

“Technology’s clean and friendly ‘weightless economy’ image is being challenged by its growing environmental footprint. While a growing number of regulations are already increasing the end-of-life costs for IT equipment, IT also has to face mounting concerns over spiraling electrical power consumption.”

According to Rakesh Kumar, research vice president at Gartner, the past 12 months have seen a significant increase in the deployment of high-density servers, which is leading to serious power and cooling problems for data centres.

“The power needed for a rack of high-density server blades can be between 10 and 15 times higher than the power needed for a traditional server environment,” he said.

“Most legacy data centres built 15 to 20 years ago cannot meet this demand. At the same time, a similar amount of additional power will be needed to remove the huge quantity of heat generated by these new machines. If the machines are not cooled sufficiently, they will shut down, with potentially damaging consequences for business service levels and IT governance.”

Gartner said the electricity needed to power servers is not the only issue. Power is also needed for technologies such as storage devices, networking controllers, uninterruptible power supplies and air conditioning.

A realistic total figure for data centre power consumption is therefore at least double that used on servers alone, Kumar warned.
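A rough back-of-the-envelope calculation, using only the multipliers quoted above, shows how quickly the numbers compound. The 2 kW baseline for a traditional rack is an assumed figure chosen purely for illustration; the 10-to-15-times blade factor, the roughly equal cooling load and the "at least double" facility figure come from Kumar's comments.

```python
# Back-of-the-envelope data centre power estimate using the multipliers quoted
# in the article. The traditional-rack baseline is an assumption, not a cited figure.
TRADITIONAL_RACK_KW = 2.0      # assumed baseline for a traditional server rack
BLADE_FACTOR = (10, 15)        # blade rack draws 10-15x a traditional rack (Kumar)
FACILITY_FACTOR = 2.0          # total facility power is at least double the server load (Kumar)

low_kw = TRADITIONAL_RACK_KW * BLADE_FACTOR[0]
high_kw = TRADITIONAL_RACK_KW * BLADE_FACTOR[1]

print(f"Blade rack IT load: roughly {low_kw:.0f}-{high_kw:.0f} kW per rack")
# A similar amount of power again is needed to remove the heat those racks generate,
# and storage, networking, UPS losses and air conditioning push the facility total
# to at least twice the server load.
print(f"Facility total per blade rack (servers plus cooling, storage, UPS, HVAC): "
      f"at least {low_kw * FACILITY_FACTOR:.0f}-{high_kw * FACILITY_FACTOR:.0f} kW")
```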

He added that overspending on power can have a considerable impact on the IT department’s ability to grow and meet business needs in the future.

“Today, energy costs typically form less than 10% of an overall IT budget. However, this could rise to more than 50% in the next few years. The bottom line is that the cost of power on this scale would be difficult to manage simply as a budget increase and most CIOs would struggle to justify the situation to company board members.”

Furthermore, Kumar advised that legislation is imminent in both North America and the European Union that will penalize organizations with large data centres that do not put measures in place to better manage waste energy.