
Pushing the limits of fiber

An intriguing technology called EDC is set to increase the capability of legacy optical fiber cabling to support 10 Gigabit data transmission.


September 1, 2005  



There is a lot of discussion in the industry about 10 Gigabit Ethernet over copper, and many people are under the impression that 10 Gigabit over fiber is a done deal.

Well, I was surprised and intrigued to learn that there is a parallel development in the industry and an intense effort by some of the silicon vendors, including big names such as Intel Corp. and Agilent Technologies Inc., to develop a new technology for 10 Gigabit over installed-base “FDDI grade” fiber.

The development is taking place in the IEEE 802.3aq task force and will be part of a new physical layer standard called 10GBASE-LRM, scheduled to be released next year.

I was attending a presentation at the BICSI Fall Conference in Nashville by Andrew Oliviero of OFS when I first learned of the new technology called EDC (Electronic Dispersion Compensation), which will be integrated into the emerging 10GBASE-LRM standard.

What intrigued me was that advances in digital signal processing that are extending the limits of copper today will also have a big role to play in extending the limits of fiber. I was naturally curious to learn more about this technology and its role in enabling new applications for fiber.

Optical transmission over multi-mode fiber is inherently limited by a parameter called modal dispersion. If a light pulse is sent over the fiber, the different modes (or paths of light through the core of the fiber) arrive at different times at the far end of the fiber, causing a broadening of the pulse.
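
As a rough picture of what this looks like, the short Python sketch below sums copies of the same launched pulse arriving over a few hypothetical mode paths. The path delays and relative powers are made-up illustrative numbers, not a model of any real fiber.

```python
# Toy illustration of modal dispersion: the same launched pulse arrives over
# several hypothetical mode paths with different delays, and the receiver
# sees the sum. Delays and weights here are illustrative assumptions.
import numpy as np

dt_ps = 5.0                                 # simulation step, picoseconds
pulse = np.zeros(200)
pulse[10:30] = 1.0                          # a 100 ps launched pulse

# Hypothetical mode groups: (extra delay in ps, relative power)
modes = [(0.0, 0.5), (60.0, 0.3), (140.0, 0.2)]

received = np.zeros_like(pulse)
for delay_ps, weight in modes:
    shift = int(round(delay_ps / dt_ps))
    received[shift:] += weight * pulse[:len(pulse) - shift]

def width_ps(signal):
    """Span between the first and last samples above 10% of the peak."""
    above = np.flatnonzero(signal > 0.1 * signal.max())
    return (above[-1] - above[0] + 1) * dt_ps

print("launched pulse width:", width_ps(pulse), "ps")     # 100 ps
print("received pulse width:", width_ps(received), "ps")  # noticeably wider
```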

At a serial transmission rate of 10 Gb/s the bit time interval is very small, about 100 picoseconds (or 100 × 10⁻¹² seconds).
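
That figure follows directly from the line rate; a quick sanity check, assuming nothing beyond the stated 10 Gb/s serial rate:

```python
# Bit time at a 10 Gb/s serial line rate.
line_rate = 10e9                     # bits per second
bit_time = 1.0 / line_rate           # seconds per bit
print(f"{bit_time * 1e12:.0f} ps")   # -> 100 ps
```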

For such a small bit time interval, it is not unusual for the pulse to broaden into an adjacent time slot that is allocated to a consecutive symbol.

This pulse broadening impairs the ability to recover the signal, a problem known as intersymbol interference (ISI).

Electronic filtering techniques are widely used to reduce ISI in lower-speed systems such as disk drives, in wired communication systems such as DSL, and in wireless applications.

This filter technology, when applied to optical dispersion, is known as EDC.

How it works is explained in an Intel white paper entitled “10 Gb/s Optical Transceivers: Fundamentals and Emerging Technologies.” Essentially, the filter consists of a combination of feed-forward equalizer (FFE) taps providing linear filtering and feedback equalizer (FBE) taps dealing with the non-linear portion of the pulse reshaping.

Adaptive algorithms are used to automatically align the filter coefficients to minimize the error relative to an ideal signal.

What does this mean in layman’s language? The filter processes the signal to remove the portion of the interfering signal that spills over into an adjacent time slot. The net effect is a reduction in ISI at the expense of some loss in signal power.
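
For readers who want to see the structure in code, here is a minimal sketch of a generic FFE-plus-feedback equalizer adapted with a least-mean-squares (LMS) update. It is not the 802.3aq reference design; the tap counts, step size and two-tap ISI channel are illustrative assumptions chosen only to show the coefficients converging.

```python
# Generic FFE + feedback equalizer with an LMS update (illustrative sketch,
# not the 802.3aq design). All numerical values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy link: random +/-1 symbols through a channel whose impulse response
# smears each pulse into the next symbol slot (ISI), plus a little noise.
symbols = rng.choice([-1.0, 1.0], size=20000)
channel = np.array([0.7, 0.5])                     # main tap + ISI tap
received = np.convolve(symbols, channel, mode="full")[:len(symbols)]
received += 0.02 * rng.standard_normal(len(received))

n_ffe, n_fbe = 5, 2          # feed-forward and feedback tap counts
mu = 0.01                    # LMS step size
ffe = np.zeros(n_ffe)
ffe[0] = 1.0                 # start as a pass-through filter
fbe = np.zeros(n_fbe)
past_decisions = np.zeros(n_fbe)

errors = []
for k in range(n_ffe - 1, len(received)):
    x = received[k - n_ffe + 1:k + 1][::-1]        # newest sample first
    # Linear (FFE) part minus the feedback correction built from
    # already-decided symbols.
    y = ffe @ x - fbe @ past_decisions
    decision = 1.0 if y >= 0 else -1.0
    err = decision - y                             # decision-directed error
    # LMS update: nudge the coefficients to shrink the error.
    ffe += mu * err * x
    fbe -= mu * err * past_decisions
    past_decisions = np.concatenate(([decision], past_decisions[:-1]))
    errors.append(err * err)

print("mean squared error, first 1k symbols:", np.mean(errors[:1000]))
print("mean squared error, last 1k symbols: ", np.mean(errors[-1000:]))
```

Running it, the mean squared error over the last thousand symbols should come out well below that of the first thousand, which is the self-aligning behaviour the adaptive algorithms described above are designed to achieve.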

10GBASE-LRM is being introduced as an alternative to 10GBASE-LX4, which uses four lasers to split the 10 Gb/s signal into four parallel streams on four different wavelengths of light.

The transmission rate for each wavelength is 2.5 Gb/s, and the bit time interval is correspondingly increased by a factor of four, thus reducing the problem of intersymbol interference.
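
The same back-of-the-envelope arithmetic, applied to the four-wavelength split (a sketch using the round numbers above):

```python
# 10GBASE-LX4: the 10 Gb/s stream is carried on four wavelengths, so each
# lane runs at a quarter of the serial rate and its bit time is four times
# longer than the 100 ps serial case.
serial_rate = 10e9
lane_rate = serial_rate / 4                # 2.5 Gb/s per wavelength
lane_bit_time = 1.0 / lane_rate            # seconds per bit on each lane
print(f"{lane_bit_time * 1e12:.0f} ps per bit, i.e. 4 x 100 ps")
```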

The drawback of this solution is that it is relatively expensive (although this is debatable since 10GBASE-LX4 transceivers are coming down in price) and it is not scalable to smaller form factors (notably XFP).

The original objective of 10GBASE-LRM was to support a minimum distance of 220 meters over conventional grade 62.5 and 50 micron multi-mode fiber, but more recently customer feedback prompted a change of this objective to 300 meters.

In comparison, a 10GBASE-SR solution, which is designed for 300 meters over laser-optimized 50 micron multimode fiber, can only support a distance of 26 meters over “FDDI grade” fiber.

EDC is an intriguing technology that will increase the capability of legacy optical fiber cabling to support 10 Gigabit data transmission. MMF is the largest installed fiber base in the vertical riser, and while it works fine for GbE transmission, it is severely limited for 10 GbE transmission because of the problem of modal dispersion.

10GBASE-LRM may be an alternative to consider compared to pulling in new fiber or using a WDM-based approach. For new installations, the price/performance benefit of 10GBASE-SR optical modules using VCSEL lasers is hard to beat; 10GBASE-LR modules carry a price premium of about 50% over their VCSEL counterparts.

A 10GBASE-SR solution using laser-optimized optical fiber is the most cost-effective approach. It also provides the capability for extended-reach, higher-speed applications in the future using EDC-enabled transceivers.

Paul Kish is Director, Systems & Standards at Belden CDT. He is also vice chair of the TR-42 engineering committee.

Disclaimer: The information presented is the author’s view and is not official TIA correspondence.