
The need for SPEED

Networking technologies that were once considered promising contenders have been overrun by bigger, better and faster competitors. Which are winning today's race and which will be left in the dust?

September 1, 2000  


If one concern dominates the networking arena today, it is bandwidth. Increasing Internet use, the rise of corporate intranets and a growing interest in capacity-hungry multimedia applications all mean networks need bigger and bigger pipes to keep up with the traffic flow. As a result, a great deal of effort is going into evolving networking protocols to handle those bandwidth demands and beefing up the underlying infrastructure.

On the protocol front, faster and faster Ethernet is the order of the day. Fiber Distributed Data Interface (FDDI), once seen as a promising route to faster networks using optical fiber, has largely been eclipsed by 100-megabit-per-second Fast Ethernet and Gigabit Ethernet. And Asynchronous Transfer Mode (ATM), which continues to have a role in wide-area networks, is losing favour as a campus backbone technology.

The rapid growth of Gigabit Ethernet surprised some vendors, says Eric Thompson, principal analyst at the San Jose, California-based research firm Dataquest, a Gartner Group Inc. subsidiary. The Institute of Electrical and Electronics Engineers (IEEE) was quick to ratify the gigabit standard, and some vendors who had assumed it would be slow off the mark scrambled to catch up. Not wanting to be burned the same way again, Thompson says, everyone in the industry has hopped on the bandwagon for Gigabit Ethernet’s successor, 10-Gigabit Ethernet.

The 10-Gigabit Ethernet Alliance, an industry group promoting the faster standard, has a fairly impressive list of charter members that includes Nortel Networks Corp., 3Com Corp., Cisco Systems Inc., Intel Corp., Sun Microsystems Inc., Extreme Networks and World Wide Packets. Virtually all the other significant communications and networking players are also members.

“Five years ago 100 megabits was fast,” says Rich Seifert, president of Networks and Communications Consulting in Los Gatos, CA. “Gigabit Ethernet was ‘whoa, this is faster than anybody needs, but we’ll need it some time.’” Now Gigabit Ethernet is merely fast and 10-Gigabit Ethernet is the technology whose time, while it has not yet come, is within sight.


The IEEE is at work on an international standard for 10-Gigabit Ethernet, called 802.3ae; it is expected to be complete by the spring of 2002. Some vendors will not wait for the final standard, though — Thompson says the first “pre-standard” products can be expected before the end of this year.

As they usually do when new standards are nearing completion, vendors will make educated guesses as to how the final standards will look — not too difficult, considering many of them are represented on the standards bodies — and will tweak their products to fit with the final specs. Carlos Zaidi, who heads strategic marketing for the 10-Gigabit Ethernet program at Nortel Networks Corp. in Brampton, ON, says the first draft of the 802.3ae standard is expected this fall, and such first drafts usually give a good overall picture of how the final standard will look.

Dataquest expects 10-Gigabit Ethernet port sales to approach US$34 million this year and reach just over US$71 million in 2001. However, the real year of 10-Gigabit Ethernet will be 2002, when Dataquest predicts an order-of-magnitude sales increase to US$714 million, in spite of a continued decline in the average price of a port.

Originally, Thompson says, it appeared service providers would be the major customers for 10-Gigabit Ethernet. They are still expected to account for a sizeable chunk of the market. By using a faster version of Ethernet, the carriers will be able to offer their customers longer-haul links that look just like part of the customers’ local-area networks, allowing for an “Ethernet-everywhere” strategy rather than a mixture of protocols. By marrying OC-192 data rates with Ethernet compatibility, 10-Gigabit Ethernet will “fundamentally change the economics of wide area networking,” says Zaidi.

Meanwhile, Thompson adds, another market has come to light. It seems that a number of corporate network users have enough one-gigabit network links that they like the thought of 10-Gigabit Ethernet backbones to aggregate those connections. Again, a big advantage will be the homogeneity that comes with extending the familiar Ethernet protocol over longer distances, rather than relying on different technologies.

The growth of 10-Gigabit Ethernet will eat into the use of Synchronous Optical Network (SONET) technology over time, Thompson forecasts. Although 10-Gigabit Ethernet will be able to run over SONET to simplify its use on existing links, many new installations are likely to rely on “native 10-Gigabit” running directly on fiber without a SONET layer.


While the pundits believe 10-Gigabit Ethernet will have a place, nobody is talking about running it to the desktop. The standard probably will never support copper wire, except maybe for very short distances within wiring closets, and it is hard to imagine anyone needing 10 gigabits to a client.

“It doesn’t make sense to deliver those sorts of bandwidth to individual humans,” Seifert says — though he notes that he has learned never to say never when making technology predictions. It is just possible that future applications might some day demand 100-gigabit capacity to a single desktop. And in the meantime, Thompson notes that preliminary discussions are already underway on extending Ethernet to terabit and even petabit bandwidth.

Still, gigabit and higher speeds are needed mainly for backbones and server connections today. Ross Chevalier, director of technology at Novell Canada Inc., Markham, ON, says changes in how applications work are altering the demands on networks. In the old client/server model, he explains, traffic between client and server was quite predictable. Today it is less so. Intranets and the Internet mean a lot of graphical information, which users expect to appear on their screens quickly. And while the client may be a powerful personal computer, it could also be something as simple as a cell phone, so servers must carry more of the load. “The networking we’re doing today looks more like the old mainframe networks than anything else,” Chevalier says.

He says part of the solution is policy-based networking, in which bandwidth can be re-allocated dynamically, either according to predetermined schedules that predict changing needs or according to actual traffic patterns on the network.
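In code, the idea can be sketched as a scheduler that picks a bandwidth allocation either from a predetermined timetable or from observed traffic. The function, class names and policy shapes below are hypothetical illustrations, not drawn from any particular product:

```python
# Minimal sketch of policy-based bandwidth allocation (hypothetical API).
# A link's capacity is split among traffic classes either by a time-of-day
# schedule or in proportion to recently observed traffic.

def allocate(capacity_mbps, schedule, observed_mbps, hour, use_schedule=True):
    if use_schedule:
        # Predetermined shares for this hour, e.g. favour backups at night.
        shares = schedule[hour]
    else:
        # Re-allocate in proportion to actual traffic on the network.
        total = sum(observed_mbps.values()) or 1
        shares = {cls: load / total for cls, load in observed_mbps.items()}
    return {cls: capacity_mbps * share for cls, share in shares.items()}

# Illustrative policies: favour interactive traffic during business hours,
# backups overnight; observed loads in Mb/s.
schedule = {9: {"interactive": 0.7, "backup": 0.3},
            2: {"interactive": 0.2, "backup": 0.8}}
observed = {"interactive": 600, "backup": 200}

print(allocate(1000, schedule, observed, hour=9))
print(allocate(1000, schedule, observed, hour=9, use_schedule=False))
```

Real policy engines layer authentication, priority queuing and per-user rules on top, but the core trade-off — schedule-driven versus traffic-driven reallocation — is the one Chevalier describes.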

At the same time, he adds, smart network architects are putting in high-speed network pipes between servers, building massive server farms. Networks need to be built to handle a lot of traffic from many clients back to these servers, and to deliver the response times users expect all the time — whether or not many others are online. What works seems to be a set-up where the pipes get bigger and bigger as you get closer to the server. So while 10-megabit and 100-megabit Ethernet links are still fine for the desktop, these need to be aggregated into gigabit and bigger pipes.
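A rough way to see why the pipes must widen toward the server is to work the numbers. The client count, link speed and 10:1 oversubscription target below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope aggregation math: many modest client links
# funnel into progressively fatter uplinks toward the server farm.
# All figures here are illustrative assumptions.

clients = 240                 # desktops behind one access layer
client_mbps = 100             # Fast Ethernet to each desktop
oversubscription = 10         # a common access-layer design target

# Worst-case offered load if every client transmitted at once
peak_demand_mbps = clients * client_mbps

# Uplink capacity needed at the chosen oversubscription ratio
uplink_mbps = peak_demand_mbps / oversubscription

print(f"Peak demand: {peak_demand_mbps} Mb/s")           # 24000 Mb/s
print(f"Uplink needed at 10:1: {uplink_mbps:.0f} Mb/s")  # 2400 Mb/s
```

Even at 10:1 oversubscription, a couple of hundred Fast Ethernet desktops call for multiple gigabit uplinks, and a backbone aggregating several such access switches quickly justifies 10-Gigabit links.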


What this means for cabling is that more capacity to the desktop is not a big priority. For that reason, Chevalier for one does not expect a mad rush to the Category 6 standard for twisted pair, which is expected to be formally ratified early in 2001.

This does not mean Category 6 will be a bust. For new installations, it will make sense. Since cabling installation is expensive, it makes sense to put in more than you need now, to avoid having to do it again soon.

Today, though a number of vendors offer products that provide the performance levels specified for Category 6, customers fear the standard is a moving target. “Once (Category 6) does become a standard, most of your consultants will begin truly specifying it,” says John Schmidt, associate product manager for enterprise connectivity products at ADC Telecommunications, Inc., in Minneapolis, MN. In fact, sales may take off before the standard is formally ratified — once the specifications appear stable.

Replacing existing cabling is a different matter. Given the cost of rewiring, Chevalier says, few organizations will replace Category 5 or older installations until they really have to — and they will not have to until they need to run gigabit speeds to the desktop, which will not be any time soon.

The same applies to fiber to the desktop, which seems destined to remain a pipe dream for a long time to come. “I don’t see any impetus to make it happen,” says Seifert. “It’s just too much of an infrastructure investment,” agrees Nick Tidd, president of 3Com Canada Ltd. in Toronto.


There are some interesting developments in optical networking, though. The advances that have done the most for its growth are ways of cramming more data into the same physical fiber, along with techniques for simplifying the hardware required to push that data through it.

Probably the single most important development in optical networking since the invention of fiber itself is dense wavelength division multiplexing (DWDM). By running many wavelengths over one fiber, DWDM lets that fiber carry far more data. Eighty to 100 wavelengths on one fiber is now common in the field, says Kathy Szelag, VP of marketing for the optical networking group at Lucent Technologies Inc., Murray Hill, NJ, and Lucent has demonstrated 1,022 wavelengths in the laboratory.
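Because each wavelength is an independent channel, fiber capacity scales linearly with the wavelength count. The per-wavelength rate below (roughly 10 Gb/s, OC-192-class) is an assumption for illustration, not a figure from Lucent:

```python
# DWDM capacity arithmetic: total fiber capacity is simply the number
# of wavelengths times the data rate carried on each one.
wavelengths_field = 100       # common in the field, per the article
wavelengths_lab = 1022        # Lucent laboratory demonstration
gbps_per_wavelength = 10      # assumed OC-192-class rate per channel

field_capacity_gbps = wavelengths_field * gbps_per_wavelength
lab_capacity_gbps = wavelengths_lab * gbps_per_wavelength

print(field_capacity_gbps)    # 1000 Gb/s — about a terabit per fiber
print(lab_capacity_gbps)      # 10220 Gb/s — roughly ten terabits
```

That one-terabit-per-fiber figure is what lies behind the “terabit networks” customers are now being promised.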

While it was developed largely to bail out communications carriers whose fiber backbones were running out of capacity, DWDM also permits new applications. For instance, late last year IBM Corp. announced Fiber Saver, an optical connection product for its System/390 mainframe computers. The high-speed channels linking System/390s have traditionally required two optical fibers each, and IBM’s Geographically Dispersed Parallel Sysplex (GDPS) set-up, in which two or more System/390s work together, can take from 100 to 300 channels. The Fiber Saver, however, uses DWDM to multiplex as many as 64 channels over one pair of fibers. This not only cuts costs dramatically, says IBM’s Ernie Swanson, System/390 offering manager for input/output connectivity, but permits faster connections between mainframes.
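The fiber savings follow directly from those figures. The worked example below assumes the article’s worst case of 300 channels:

```python
import math

# Traditional layout: every System/390 channel needs its own fiber pair.
channels = 300                        # upper end of a GDPS configuration
fibers_traditional = channels * 2     # two fibers per channel

# Fiber Saver: up to 64 channels multiplexed onto one fiber pair via DWDM.
channels_per_pair = 64
pairs_needed = math.ceil(channels / channels_per_pair)
fibers_dwdm = pairs_needed * 2

print(fibers_traditional)  # 600 fibers without DWDM
print(fibers_dwdm)         # 10 fibers (5 pairs) with Fiber Saver
```

Cutting 600 strands of fiber between sites down to 10 is where the dramatic cost reduction Swanson describes comes from.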

DWDM’s ability to break one fiber into many channels will also change the relationship between network operators and their customers. To date, carriers have sold their customers services. DWDM will let them simply sell the wavelengths and customers can do with them what they will. This promises network users more control of their bandwidth without having to rely on carriers as intermediaries. It has sparked interest in “dark fiber,” in which network operators simply install fiber and then sell to customers the right to transmit data over it at certain wavelengths.

“Customers can build and own their own terabit networks,” says Bill St. Arnaud, senior director of advanced networks at Canadian Network for the Advancement of Research, Industry and Education (CANARIE) in Ottawa.


Meanwhile, a new generation of optical switches will eliminate one of the bottlenecks that have afflicted fiber networks — the switching equipment. By switching traffic at the optical layer, optical switches significantly reduce network costs and also make the networks more flexible, says Nicholas DeVito, VP of product management and business development at optical switch manufacturer Tellium Inc. in Oceanport, NJ.

The old approach used patch panels, which often had to be physically rewired (for instance, to provision an OC-48 connection for a customer). In the worst case, this could involve changes at multiple locations along the route and take months, DeVito says. With optical switches it is almost instantaneous — and costs are lowered.

Pioneer Consulting, a market research firm in Cambridge, MA, projects the North American optical switching market will expand from about US$427 million this year to some US$10 billion in 2004.

Optical switches are not really all-optical; data still gets converted into electronic form and back. DeVito says this is because management data must be added and changed, and the technology is not yet there to do all of this optically. But the electrical aspect may eventually disappear. Lucent’s WaveStar LambdaRouter, for instance, contains 256 microscopic mirrors that deliver optical signals from fiber to fiber. The question now seems to be when it will make economic sense to use all-optical switches.

If there is any lesson in the past few years of network developments, it is that even technologies that seem like overkill at first become mainstream quite quickly. The 100-megabit Fast Ethernet that once seemed as if it might find a niche on backbone networks is now supported by network interface cards in entry-level notebook computers. The hunger for bandwidth shows no signs of abating.

Grant Buckler has written about information technology and telecommunications since 1980. He is now a freelance writer and editor living in Kingston, ON.
