
Clearing out 100 GbE Roadblocks

Executive decisions are being made around whether to transition to 40 GbE or leapfrog to 100. Given that standards are being developed in parallel, the decision comes down to timing and price differentials.


March 1, 2013  



The path to 100 Gigabit Ethernet (GbE) may still be littered with roadblocks, but their numbers are fewer with each passing year. High cost and limited technology availability continue to keep wholesale migration at bay. At the same time, movement on the standards front is doing its part in bringing the price down.

According to Dhritiman Dasgupta, senior director of product marketing for Juniper Networks in Sunnyvale, Calif., the time is fast approaching when 100 GbE will play a key part in meeting the accelerating demand for speed.

“Networks have been bombarded over the last five years and are reeling to catch up with the changes we’re seeing in enterprise and data centre ecosystems,” he explains. “We’ve moved from mainframe to client-server to distributed architectures. Servers have changed completely from a standalone model with all the virtualization work being done. And storage is a lot more distributed and shared across different systems.”

With all the ecosystems working together, the network needs to catch up. Executive decisions are being made around whether to transition to 40 GbE or leapfrog to 100. Given that standards are being developed in parallel, the decision comes down to timing and price differentials. In fact, standards bodies have already moved on and are talking about 400 GbE, 800 GbE or 1 Terabit Ethernet, Dasgupta notes. “It appears that 100 GbE is not a final destination.”

Beyond the major carriers, some adoption on the 100 GbE front is making headway in interconnecting locations for very large-scale and/or high-transaction-volume entities. This can take the form of data centre to data centre, data centre to cloud, or cloud to cloud.

The next obvious place for potential growth is within the data centre itself. “So many servers are being fired up and so many applications being developed, there’s a need for even higher speeds to enable servers to talk to other servers, storage systems and switches,” Dasgupta says.

Those enterprises whose competitive survival relies on microsecond-level transaction processing (e.g. a stock exchange or genome research) will be the first in line to make the investment in their data centres, albeit at a sizeable premium, he speculates. Following that, expect the gigantic resource pools, such as Amazon, Microsoft, Google and other major cloud providers that rely on optimal speeds to keep their competitive advantage, to take the plunge.

By far the largest group, enterprise-level data centres, is unlikely to rip and replace networks over the next two years, and will opt to wait out the four- to six-year lifecycle before retiring their current equipment.

The transition, however, has its pluses and minuses, Dasgupta explains. Engineers bringing network devices to market spend 80-90% of their time on the control plane, developing and testing protocols. Because the security, quality-of-service, routing and switching protocols remain the same across speeds, very little of that effort has to be repeated to move up the ladder.

On the downside, 100 GbE requires specific optical interfaces, which drives up the cost of the physical layer considerably. The devices are also incredibly power hungry, further adding to infrastructure expenses.

By way of comparison, 40 GbE optics are two-and-a-half times more expensive than 10 GbE optics, while 100 GbE optics cost many multiples of the 40 GbE price. The inflection point will be reached when the cost is two to three times that of the previous generation speed, Dasgupta believes.

“That’s when we’ll see mass adoption. It’s definitely where standards are heading. We’ll see people throwing away 40 GbE when it makes sense to go to 100 GbE.”
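Dasgupta’s inflection-point logic can be sanity-checked with rough cost-per-bit arithmetic. The sketch below is illustrative only: it takes the article’s 2.5-times figure for 40 GbE optics at face value and sweeps a few hypothetical 100 GbE premiums; none of the numbers are vendor pricing.

```python
# Rough cost-per-bit arithmetic behind the "inflection point" argument.
# Assumed relative prices: 10 GbE optic = 1.0, 40 GbE = 2.5x (per the
# article). The 100 GbE premium over 40 GbE is a hypothetical, swept
# to find where its cost per Gb/s drops below 40 GbE's.

PRICE_10GBE = 1.0
PRICE_40GBE = 2.5 * PRICE_10GBE      # article: 2.5x the 10 GbE optic
COST_PER_GB_40 = PRICE_40GBE / 40    # 0.0625 per Gb/s of capacity

for premium in (4.0, 3.0, 2.0):      # hypothetical 100 GbE price multiples
    price_100 = premium * PRICE_40GBE
    cost_per_gb_100 = price_100 / 100
    verdict = ("beats 40 GbE" if cost_per_gb_100 < COST_PER_GB_40
               else "still costlier per bit")
    print(f"100 GbE at {premium:.1f}x the 40 GbE price: "
          f"{cost_per_gb_100:.4f} vs {COST_PER_GB_40:.4f} per Gb/s "
          f"({verdict})")
```

Cost-per-bit parity falls at a premium of exactly 2.5 times, the same as the speed ratio, squarely inside the two-to-three-times window Dasgupta describes.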

The interoperability equation:

Another critical component in adoption is interoperability, says John D’Ambrosia, chief Ethernet evangelist for Dell and chairman and one of the original founders of the Ethernet Alliance. “Within the past 18 months I’ve seen amazing demonstrations for 10 GbE, 40 GbE, 100 GbE, and 100GbE over OTN (Optical Transport Network).”

On the IEEE front the focus has been on cost-per-bit reduction, he adds. “Even at 40GbE, front panel capacities are so incredible you need bigger pipes across the backplane.”

The IEEE P802.3bm Task Force is defining the next generation of 40 GbE and 100 GbE optical offerings, which include 40 GbE over 40 km, development of 4×25 multimode fiber solutions for 20 and 100 metres, and a new single-mode fiber objective to address 500 metres.

Another project on the cabling side is the IEEE P802.3bp Reduced Twisted Pair GbE. This initiative has far-reaching implications because it will drive cabling to become more standardized for applications such as automotive, D’Ambrosia says. “What makes this so exciting is that we could be talking 200 to 300 million Ethernet ports by the end of the decade.”

For the time being, it’s 40 GbE that is taking off in data centres, while 100 GbE is almost exclusively in carrier line-type connection applications. “That’s a whole different group. They do need it and they have different economics to consider. In fact they’re already thinking terabit over 400,” D’Ambrosia says. “When you look at the fact that 40 leverages a lot of 10 GbE technologies, it makes sense. 100 GbE, however, is at an entirely different level of maturity.”

In the data centre, connections are the sticking point for 100 GbE, he adds. “You need fat pipes to transport over 100 GbE in the data centre. That requires a different link that modules aren’t able to service at the moment. It’s more in the developmental stages as vendors try to come up with more cost-effective solutions to fit inside the data centre architecture.”

Channel selection:

As it stands today, mainstream applications are working with 10 GbE ports, explains Nathan Benton, technical director for North America for CommScope in Milford, Mich. “If an administrator needs higher speed, they bond a second 10 GbE connection to the first, then scale up to 30 and eventually 40. Once they reach that point, they swap out the 4×10 for 40 GbE ports. There are already shipments going out for very high-end users.”

Achieving 100 GbE today would mean a 10×10 lane transmission over single-mode fiber, or pushing it out over multimode fiber using parallel optics, Benton explains. “However, 4×25 is an extremely important concept in the migration to higher-speed Ethernet and could have a huge impact on upgrade capability and adoption.”

That is because a 100 GbE transmission can use the same four fibers to transmit and receive, he adds. “So the fiber optics you invest in today for 40 GbE would be the same as 100 GbE. You would not have to add or change it. You would just have to pull out the transceiver in the server and replace it.”
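The fiber-reuse argument comes down to simple lane arithmetic. The sketch below illustrates it under stated assumptions (parallel optics with one fiber per lane in each direction); it is a rough illustration of the lane options named above, not a spec reference.

```python
# Lane arithmetic behind the 4x25 migration argument. With parallel
# optics, each lane rides its own fiber in each direction, so the
# fiber count is 2 x lanes. Lane options follow those named in the
# article; the accounting is illustrative.

options = [
    # (label, lanes, Gb/s per lane)
    ("40 GbE  (4x10)",   4, 10),
    ("100 GbE (10x10)", 10, 10),
    ("100 GbE (4x25)",   4, 25),
]

for label, lanes, rate in options:
    fibers = 2 * lanes  # transmit fibers + receive fibers
    print(f"{label}: {lanes * rate} Gb/s over {fibers} fibers")
```

The 4×10 and 4×25 options both land on eight fibers, four in each direction, which is why cabling installed for 40 GbE can later carry 100 GbE with only a transceiver swap; a 10×10 design, by contrast, needs twenty fibers.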

Where 100 GbE is top of mind, data centre managers are installing 10 GbE and 40 GbE cabling infrastructures that will accommodate 100 GbE when the time is right. The upgrade will require, among other things, extended-range transceivers that allow users to migrate while keeping the same distances. “Whatever they design for 10 GbE will support them all the way up to 100 GbE,” Benton notes. “Once the infrastructure is installed it’s permanent. You’re just pulling out cards and plugging new ones in.”

The distance factor:

In addition to interoperability and architecture, distance plays a major part in the migration discussion. Core telecommunication networks today are limited by available fiber. With dramatic increases in bandwidth demand, 40 GbE will be fairly short-lived in those applications, says George Jones, vice president of marketing and business development for Applied Micro Circuits Corporation (AppliedMicro) in Sunnyvale, Calif. “Most new development work is targeting 100 GbE and there are products to support that.”

Within the data centre, by comparison, 40 GbE is increasingly important. Copper simply cannot handle even a 4×25G signal over more than 10 metres, Jones says. Rather, users are turning to active optical cable to address the issue. In this scenario, the QSFP+ socket on a switch doesn’t change, but an active optical cable can draw power through a standard QSFP+ connector and convert the electrical signal to the optical domain. The process is reversed at the other end. “This is very important if you’re thinking 40G today with scalability to higher speeds down the road,” Jones says. “A technology that enables both is very important.” As it turns out, AppliedMicro has refined a technology for active optical cables and is now in partnership with Volex in the U.K. to develop products.

With the 4×25 approach in particular, cable diameters are much more manageable and connector upgrades easier, says Ron Nordin, director of research for Panduit Corp. in Tinley Park, Ill. “With 4×25 100 GbE you can use the same form factor and twinaxial cable for scaling up before you have to go to bigger connectors. With fiber, making the upgrade path from 40 GbE QSFP-style optical connectors is seamless.”

A logical place for copper within the data centre is connecting servers to switches in top-of-rack or end-of-row architectures to achieve cost savings, he says.

“There simply won’t be a market for copper-based cabling with 100 GbE,” says Jacob Macias, product manager for CableWholesale in Livermore, Calif. “Even with Cat 7 or even Cat 8, I don’t see it anywhere near 100 GbE apps.”

He believes the decision comes down to whether to spend the money today on 40 GbE with plans to expand, or to wait until 100 GbE prices drop and then make a direct leap.

“Right now I just think people are looking at the fact that 100 GbE is double the price. That’s not just the cabling and equipment. Those large fiber trunks are around $75,000 and up and ports can run in the tens of thousands of dollars. You also need a lot more power, which means more space, heating and cooling requirements. Everything needs to be a bit grander. It’s not something to be taken lightly.”
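Those figures pencil out roughly as follows. Every number in this sketch is an invented placeholder chosen only to be consistent with the quote (trunks from about $75,000, ports in the tens of thousands of dollars, and a total around double that of 40 GbE); it is not a real quote sheet.

```python
# Back-of-envelope capital comparison using the rough figures from
# the quote. All values are illustrative placeholders, not vendor
# pricing; power, space, heating and cooling are excluded and would
# widen the gap further.

def deployment_cost(ports: int, port_price: int,
                    trunks: int, trunk_price: int) -> int:
    """Very rough capital cost: switch ports plus fiber trunks only."""
    return ports * port_price + trunks * trunk_price

cost_40 = deployment_cost(ports=8, port_price=10_000,
                          trunks=2, trunk_price=40_000)
cost_100 = deployment_cost(ports=8, port_price=20_000,
                           trunks=2, trunk_price=75_000)

print(f"40 GbE build:  ${cost_40:,}")
print(f"100 GbE build: ${cost_100:,} (~{cost_100 / cost_40:.1f}x)")
```

CNS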

 

Denise Deveau is a Toronto-based freelance writer.
She can be reached at denise@denised.com.