
The 40 Gig Speed Train

Whatever shape the transition to 100 GbE takes, it appears that 40 GbE can fit the bandwidth bill quite nicely.


March 1, 2012  



In an industry where bandwidth tends to increase by multiples of 10, the leap to 100 GbE services is too big for most infrastructures and budgets.

Hence the rise in interest in 40 Gigabit Ethernet, a technology that most experts say will have a good run for at least five years if not longer.

“This is going to be the year of 10 GbE in server connectivity, which will drive the need for higher performance backbones,” says W. Paul Hooper, vice president of marketing for Gigamon in Milpitas, Calif. “A 10 GbE backbone simply doesn’t work in those cases.”

A major contributing factor is Intel Corp.’s announcement that its new Romley 10 GbE server platform would start shipping later in the first quarter of 2012, confirms Yinglin (Frank) Yang, technical marketing manager, enterprise, for CommScope, Inc. in Richardson, Tex. “This will definitely help drive 40 GbE adoption.”

The interest in 40 GbE is definitely being driven by the change on the server side, says Philippe Michelet, director of product management for HP’s data centre product line in Cupertino, Calif. HP, for one, has just announced eight new-generation servers leveraging the Intel chipset, which dramatically reduces the price of 10 GbE on the server.

“As servers go to 10 GbE, you need the ability to aggregate them,” Hooper notes. “40 GbE is the natural way to add the bandwidth to interconnect them and bring traffic to the aggregation layer.”

A recent report from Brocade Communications Systems Inc. entitled “40 Gigabit and 100 Gigabit Ethernet Are Here!” notes that 40 Gigabit Ethernet was initially targeted for short-reach data center core and aggregation layers or top-of-rack (ToR) server aggregation with copper cable or multi-mode fiber (MMF) up to 125m. It notes “40 Gigabit Ethernet is considered the next logical speed for blade server access and server network interface cards (NICs). As processor performance continues to follow Moore’s Law and double about every 24 months, it is expected that servers will require 40 Gigabit Ethernet interfaces in the next few years.”

The 40/100 debate: Opinions on 40 versus 100 GbE timelines vary, but one thing is consistent: cost and energy consumption are playing a big part in the thought process.

When assessing whether to go 40 or 100, Hooper says the 100 GbE option is simply not viable today for most operations because of the heat envelope, cost and lack of technology availability. “On the other hand 40 GbE offers some major advantages.”

Those advantages include a cheaper price point than 100 GbE and the fact that it can run across a reasonably standard infrastructure, he adds. “But it does demand more granular visibility for traffic traversing the network.”

“100 GbE is expensive,” notes Zeus Kerravala, principal analyst with ZK Research in Ashburnham, Mass. “At minimum you’re paying $100,000 per port. Plus the optics add an additional $75,000 per port. Lastly, because of the actual size of the optics (about the size of an iPhone), the port density is limited to two ports per card. Right now 40 GbE is half of that pricing and the optics are much cheaper, making it a more affordable, scalable technology.”
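For a rough sense of the economics Kerravala describes, the back-of-envelope sketch below compares cost per gigabit for a populated port. The 100 GbE figures are the ones quoted above; the 40 GbE optics price, and the helper function itself, are illustrative assumptions rather than quoted numbers.

```python
# Back-of-envelope port economics using the figures quoted in the article.
# The 40 GbE optics price below is an assumed placeholder, not a quoted figure.

def cost_per_gbps(port_cost, optics_cost, speed_gbps):
    """Total cost of one populated port, normalized per Gb/s."""
    return (port_cost + optics_cost) / speed_gbps

hundred_gig = cost_per_gbps(100_000, 75_000, 100)  # ~$100k port + ~$75k optics (quoted)
forty_gig = cost_per_gbps(50_000, 10_000, 40)      # "half of that pricing"; optics assumed

print(f"100 GbE: ${hundred_gig:,.0f} per Gb/s")    # -> $1,750 per Gb/s
print(f" 40 GbE: ${forty_gig:,.0f} per Gb/s")      # -> $1,500 per Gb/s
```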

Power consumption is also an issue, Kerravala adds. “To run 40 GbE requires an awful lot of fiber, and your switches will run hot. You have to make sure you have the proper cooling and switches. And all of that amounts to more power consumption.”

The power consumption issue does indeed loom large in the 100 GbE world, Michelet says. “When you look at the power and density of a 100 GbE interface, it’s 30 watts per port. This makes for a very bulky interface, so you’re limited in how many ports you can put on one single blade. It will take another two years before cost and power reach an acceptable price point per port.”

The connection factor: Where the real interest in 40 GbE lies at this point is in connections between network switches and devices, Kerravala explains. “Connections between equipment should run at a higher speed. If you have one server with 10 GbE connections to its switches, you will saturate that line. The backbone between the edge and the core of almost every fabric platform in these cases needs to be 40 GbE.”
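The aggregation arithmetic behind that point can be sketched as follows. The port counts and the 3:1 oversubscription target are illustrative assumptions, not figures from the article.

```python
import math

def uplinks_needed(server_ports, server_gbps, uplink_gbps, target_ratio):
    """Uplinks required to keep downstream:upstream bandwidth at or below target_ratio."""
    downstream = server_ports * server_gbps
    return math.ceil(downstream / (target_ratio * uplink_gbps))

# Assumed example: 48 servers at 10 GbE each, aiming for 3:1 oversubscription
print(uplinks_needed(48, 10, 40, 3))   # -> 4 x 40 GbE uplinks
print(uplinks_needed(48, 10, 10, 3))   # -> 16 x 10 GbE uplinks for the same ratio
```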

Doug Lindner, director of systems engineering for Juniper Networks Inc. in Toronto, says supporting high density 10 GbE solutions is a must in any new data centre design. “40 GbE is still more an aggregation than an access technology in today’s data centre designs where these access requirements are being driven by the evolution of server and storage technology. While 10 GbE is being more widely deployed today for server access, 40 GbE is considered to be the next level server access technology. But what’s limiting server access today is not the network, but rather the interconnection systems of the servers.”

Lindner notes most data centre architectures deployed in the last 10 years are hierarchical, multi-layered switch infrastructures. “These networks have been built with many branches and paths and are rooted at the top so traffic must often travel up to the top of the network and back down to reach all parts of the data centre. This approach to data centre design is very inefficient and now limits the performance in large-scale computing environments by unnecessarily increasing latency and creating jitter.”

Juniper’s QFabric architecture, for example, collapses these network layers into a single layer for high-speed communications, using 40 GbE as a transport mechanism between the interconnect chassis and the access nodes, reducing latency to 5 µs and virtually eliminating jitter.

“With more and more east/west traffic between servers, you don’t want to send that traffic to the core,” Michelet notes. The HP IRF (Intelligent Resilient Framework) technology, for example, allows organizations to configure four physical switches as one logical switch, which allows them to switch traffic server-to-server without going to the core.

The cabling conundrum: Mike Valladao, Gigamon’s product line manager, contends that, where things stand today, 40 GbE is a relatively straightforward installation task from the cabling perspective. “In many cases 40 GbE makes the most sense, because if you look at it, it’s the same as bringing four 10 GbE pipes together into a single link. Thus a channeled link quadruples its throughput.”
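Valladao’s “four 10 GbE pipes” framing lines up with how the 40GBASE-R physical layer is specified: four lanes, each signalling at 10.3125 Gbaud with 64b/66b line coding, net out to a 40 Gb/s data rate. A minimal sketch of that arithmetic:

```python
# Lane arithmetic for 40GBASE-R: four lanes of 10.3125 Gbaud with 64b/66b coding.
LANES = 4
LANE_RATE_GBAUD = 10.3125        # per-lane signalling rate
ENCODING_EFFICIENCY = 64 / 66    # 64b/66b line-coding overhead

usable_gbps = LANES * LANE_RATE_GBAUD * ENCODING_EFFICIENCY
print(f"{usable_gbps:.0f} Gb/s")  # -> 40 Gb/s of usable data rate
```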

Lindner says some are moving towards direct attach cable (DAC) solutions to save cost and improve efficiency. “By directly connecting multiple switches over copper instead of using SFP (small form-factor pluggable) and SFP+ modules at each end, you can halve the cost of interfacing multiple switches to servers in a data center environment.”

Cable efficiency is especially critical in a multi-fiber setting because of the sheer density of the devices, he adds.

“In a seven-foot rack there can be thousands of ports. You can imagine the kind of cabling mass that starts to develop. Cable management systems for those kinds of environments are increasingly important. But, interestingly, as the network equipment moves up in terms of port bandwidth capacity, there can be a reduction in cable density since each cable can carry more traffic. This increased cabling efficiency can reduce the physical cable bulk by a factor of two or three and reduces your cost per terminated bit.”
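A rough sketch of the cable-count arithmetic behind that observation follows; the rack’s aggregate uplink bandwidth is an assumed figure, and the physical bulk saved depends on the media used, which is why the reduction cited is a factor of two or three rather than a straight four to one.

```python
import math

def cables_for(aggregate_gbps, link_gbps):
    """Uplink cables needed to carry a given aggregate bandwidth."""
    return math.ceil(aggregate_gbps / link_gbps)

aggregate = 320                    # Gb/s of uplink capacity out of one rack (assumed)
print(cables_for(aggregate, 10))   # -> 32 cables at 10 GbE
print(cables_for(aggregate, 40))   # ->  8 cables at 40 GbE
```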

In this new architectural scenario, efficient monitoring capabilities are equally critical, Valladao says. “There are no products today that can monitor at 40 GbE rates. Most out there can monitor only 6 to 15 Gbps of total throughput.”

Gigamon for its part is incorporating 40 GbE connectivity into its H Series traffic visibility platform design to provide intelligent visibility for the tools and systems that manage, monitor and record traffic across enterprise and service provider networks.

“Without the right monitoring capabilities, your backbone will choke and die in a 40 GbE world,” Hooper says. “It’s about making data analysis tools much more efficient by giving them only the information they need, so that networks aren’t overwhelmed.”

Banking on future 100 GbE prospects: Opinions on the lifespan of a 40 GbE investment before 100 GbE comes of age vary from three to seven years. As organizations wait for 100 GbE to hit the mainstream, Kerravala recommends they pay close attention to protecting their investments. “Anyone putting in a switch today that is using 40 GbE should ensure that 100 GbE line cards will work in it without having to do chassis upgrades. Make sure you know what you’re doing when buying the stuff.”

The good news is the industry has been proactive in supporting a roadmap and has been careful about standards development in both the 40 and 100 GbE evolution, Lindner explains. “These standards evolved together and were ratified within a very short time of each other. That’s an advantage for the evolution of optical transmission systems, because vendors are now able to create technologies that can meet both sets of optical requirements.”

That’s how the IEEE planned it, says Greg Hankins, global solutions architect for Brocade in San Jose, Calif. “It predicted the future in some ways. But 100 GbE as a server interface won’t make sense for many, many years.”

With the standards designed to facilitate transitions to higher speeds, Hankins says the evolution will not put too much pressure on technicians because a lot of attention has been focused on backward compatibility. “From frame format to cabling to optics, the industry is keeping as much in common as possible. Nothing changes from a configuration perspective, because it’s standard Ethernet. Since the fiber used for 100GBASE-SR10 does have more strands, however, they will have to figure out different terminating schemes for MPO cables.”
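The strand counts behind Hankins’ remark can be sketched as follows for the parallel-optics variants of the two standards; connector layouts in a given cable plant may differ.

```python
def active_fibers(lanes):
    """Parallel optics use one transmit and one receive fiber per lane."""
    return lanes * 2

print(active_fibers(4))    # 40GBASE-SR4:   8 active fibers, typically on a 12-fiber MPO
print(active_fibers(10))   # 100GBASE-SR10: 20 active fibers, typically on a 24-fiber MPO
```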

Those with a 100 GbE growth path for their network will need to learn how to build that out across existing network systems, cautions Lindner.

“They have to consider how the optical transport systems will support 100 GbE in the future, and how this will impact the fiber plant as well as the interconnected network systems.”

Whatever shape the transition to 100 GbE takes, ultimately the driving force behind any timing decisions is economics. In the meantime, it appears that 40 GbE can fit the bandwidth bill quite nicely.

CNS

Denise Deveau is a Toronto-based freelance writer. She can be reached at denise@denised.com.