Connections +

Maintenance & Testing – Supporting Gigabit Ethernet

Certifying an installed fiber optic link to support Gigabit Ethernet presents unique measurement challenges that can be overcome with the correct tools and procedures.

May 1, 2001  


The Ethernet standard has been the most successful networking platform in history. In a relatively short period of time, network connections have evolved from conventional shared or switched 10-Megabit Ethernet connections to switched Fast Ethernet, providing 100 Megabit bandwidth links to individual users.

Today, business and technical applications are advancing to include high-resolution graphics, video and audio that exceed even the high capabilities of Fast Ethernet. The result is a rapid movement towards Gigabit Ethernet, delivering speeds of 1000 Megabits per second (Mbps) while using the same Ethernet frame format and media access control technology as all other 802.3 Ethernet technologies.


Gigabit Ethernet can be expected to quickly proliferate within corporate networks due to its outstanding performance, low cost and relatively easy implementation. According to a study by research and consulting firm Dell’Oro Group, Redwood City, CA, Gigabit Ethernet switches cost less than US$2 per Mbps, a fraction of the cost of FDDI and ATM, and less than half the unit-cost of 100 Mbps Ethernet. Switches on the market today perform at full wire speed of 1.0 billion bits per second with 64-byte frames.

The fact that Gigabit Ethernet network topology follows the traditional rules of Ethernet helps leverage the installed base of Ethernet equipment. Network managers are able to utilize the skill sets and training of their existing network administration staff and use their familiar network analysis and management tools. Deployment and administration are also simplified by the fact that there is no need to add other protocols or technologies.

Yet, implementation of Gigabit Ethernet is far from a simple process. One of the most complex issues is selecting and certifying cabling capable of meeting the demands of 1000 Mbps transmission. The IEEE 802.3 Gigabit Ethernet standards include five physical layer specifications — three for fiber optic media and two for copper media:

1000BASE-SX defines a standard for use with 850 nm lasers over multimode fiber optic cabling.

1000BASE-LX is a standard for use with 1300 nm lasers, based on the Fibre Channel signalling specification, over multimode or single mode fiber optic cabling.

1000BASE-LH is a long-haul specification designed for use on metropolitan area networks.

1000BASE-CX is intended for short haul copper connections of 25 metres or less within wiring closets.

1000BASE-T permits Gigabit Ethernet over copper twisted pair cabling.


The majority of larger installations are moving to fiber optic cabling because it offers major advantages in Gigabit Ethernet applications. Research performed by The Tolly Group of Manasquan, NJ shows that the longer link lengths possible with fiber optic cabling, combined with recent reductions in component cost, have brought the cost of fiber LANs to 15 to 22 per cent below copper in typical applications. The key to these reductions is the move from the distributed network design required by the 100-metre limit of unshielded twisted pair (UTP) copper cable to a centralized network design that provides dramatic reductions in the number and size of telecommunications rooms. Another major factor is the recent reduction in the cost of fiber networking components. For example, the price of channel jacks for the latest generation of fiber is now below that of Category 5e and Category 6 UTP.

There are two different types of fiber optic links, and the type used will have an impact on the testing process. With single mode fiber, the diameter of the fiber core is on the order of 8 to 10 microns, close to the wavelength of the light, so that only a single mode of light can pass through the fiber. Because the light is limited to a single transmission path, an optical signal can travel long distances with relatively low losses.

Multimode fiber, on the other hand, has a much larger core. This means that light propagates along multiple paths, each of which has a slightly different length, causing distortions that limit the distance over which the integrity of the signal can be maintained. Because it is much less expensive than single mode fiber, multimode fiber is the predominant type of fiber used in local area networks (LANs).
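The single-mode condition described above can be checked numerically with a fiber's normalized frequency, or V-number: a step-index fiber carries only one mode when V is below about 2.405. A minimal sketch, using typical illustrative core sizes and numerical apertures (these example values are assumptions, not figures from the article):

```python
import math

def v_number(core_radius_um: float, wavelength_nm: float,
             numerical_aperture: float) -> float:
    """Normalized frequency V = (2*pi*a / lambda) * NA for a step-index fiber."""
    wavelength_um = wavelength_nm / 1000.0
    return (2 * math.pi * core_radius_um / wavelength_um) * numerical_aperture

# Typical single mode fiber: ~4.1 um core radius, NA ~0.12, at 1300 nm band
v_sm = v_number(4.1, 1310, 0.12)
# Typical 62.5/125 multimode fiber: 31.25 um core radius, NA ~0.275, at 850 nm
v_mm = v_number(31.25, 850, 0.275)

print(f"single mode fiber: V = {v_sm:.2f} (single mode since V < 2.405)")
print(f"multimode fiber:   V = {v_mm:.1f} (supports many modes)")
```

The small core of single mode fiber keeps V below the cutoff; the large multimode core pushes V far above it, which is why many propagation paths exist.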


While one of the key advantages of Gigabit Ethernet is that it builds on existing technologies, one of its hazards is the fact that traditional testing and certification methods will not necessarily hold up at Gigabit speeds. It is important to note that the loss budgets, the allowable losses through the fiber cable plant, are much tighter with Gigabit Ethernet than with 10 or 100 Mbps. In the past, most fiber cable used in LANs could pass certification with loss budgets as high as 5 or 10 dB. In both 1000BASE-SX and 1000BASE-LX, the actual link loss budget is 7.5 dB, but this must account for the effects of both attenuation and dispersion. With dispersion factored out, loss budgets for Gigabit Ethernet are as low as 2.35 dB. With a typical loss of 0.5 to 0.75 dB per connector pair, many installations can be close to the limit even if they are carried out perfectly; they require highly accurate testing methods in order to avoid failing certification, even when they are within the acceptable range.
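The tightness of these margins can be illustrated with simple arithmetic: total link loss is the fiber attenuation plus the connector losses, compared against the 2.35 dB worst-case budget the article cites. A hypothetical sketch (the 300 m link length and 3.5 dB/km attenuation figure are assumed example values):

```python
def link_loss_db(fiber_km: float, atten_db_per_km: float,
                 connector_pairs: int, loss_per_pair_db: float) -> float:
    """Estimated insertion loss: fiber attenuation plus connector losses."""
    return fiber_km * atten_db_per_km + connector_pairs * loss_per_pair_db

BUDGET_DB = 2.35  # worst-case Gigabit Ethernet budget once dispersion is factored out

# Hypothetical 300 m multimode link at 850 nm (~3.5 dB/km), two connector pairs,
# evaluated at both ends of the article's typical 0.5-0.75 dB per-pair range
for per_pair in (0.5, 0.75):
    loss = link_loss_db(0.3, 3.5, 2, per_pair)
    print(f"{per_pair} dB/pair -> {loss:.2f} dB total, "
          f"margin {BUDGET_DB - loss:+.2f} dB")
```

At 0.5 dB per connector pair this example squeaks under the budget; at 0.75 dB per pair the same flawless installation exceeds it, which is exactly why accurate test methods matter.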

Many network engineers are still using LED sources when measuring losses in cabling installations intended for use with Gigabit Ethernet. At 10 or 100 Mbps, LED sources almost always provide perfectly acceptable results, which helps to explain why much of the existing test equipment base still uses LED sources. Yet, there are some significant problems with LED testers when they are used on cabling intended to run Gigabit networks.

An LED transmits a wide, diffuse array of light energy that fills a multimode fiber and excites far more higher order modes than any laser. These higher order modes are more susceptible to bending loss. In addition, inevitable misalignment in the fiber-to-fiber connection will prevent the receiving fiber from capturing all of the light energy. The LED measurement will consequently exhibit much greater loss, due to both connector loss and bending loss, than will be seen in the actual laser transmission.


Lasers provide a more accurate alternative for testing Gigabit Ethernet cable plants. As they launch light in more powerful and concentrated beams that have fewer higher order modes than LED beams, lasers are much less susceptible to bending loss. In addition, a concentrated beam of laser light is much less sensitive to misalignment between fibers. As a result, the laser test will generally exhibit much lower and more accurate loss measurements, particularly on multimode fiber.

The choice between using single mode or multimode fiber is an important one when it comes to loss testing. The higher order modes (those that travel near the exterior of the fiber core) are more susceptible to loss due to bending of the fiber than lower order modes (those travelling near the centre of the fiber core). Because a laser concentrates the light energy near the centre of the fiber, only the lower order modes are excited in a multimode fiber. However, an LED yields what is called an “overfilled launch”, because it completely fills the fiber and excites all modes, both lower and higher order. This is another reason why testing a multimode fiber with an LED light source is likely to yield unrealistically high loss values.


The first step in cable testing is to turn on the source and select the wavelength at which loss is to be measured. Next, to provide a reference power level for loss measurements, measure the power of the source directly at the meter. The reference power must be high enough to register at the meter with all of the cable loss subtracted.
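The adequacy check in the last sentence can be stated as a simple inequality: the reference level in dBm, minus the worst-case expected loss, must stay above the meter's sensitivity floor. A hedged sketch (the dBm figures are assumed examples, not values from the article):

```python
def reference_adequate(reference_dbm: float, expected_loss_db: float,
                       meter_floor_dbm: float) -> bool:
    """True if the meter can still register power after all cable loss is subtracted."""
    return reference_dbm - expected_loss_db > meter_floor_dbm

# e.g. a -20 dBm reference, up to 7.5 dB of link loss, meter floor at -55 dBm:
# the received level of -27.5 dBm is comfortably above the floor
print(reference_adequate(-20.0, 7.5, -55.0))
```

A reference that fails this check forces a retest with a stronger source before any loss measurement is meaningful.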

There are two basic methods used to measure loss — single-ended loss and double-ended loss. Single-ended loss is measured by mating the cable under test to the reference launch cable and measuring the power out the far end of the cable. This method measures the loss of the connector mated to the launch cable, as well as losses in the cable that may be caused by splices, the fiber itself and other connectors. Double-ended loss is measured by attaching the cable under test between two reference cables — one attached to the source and the other to the meter. This method differs from the first in that it measures the loss of two connectors.
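In either method, the loss figure itself is the ratio of the reference power to the power measured through the cable under test, expressed in decibels. A minimal sketch, assuming both readings are in milliwatts:

```python
import math

def loss_db(reference_mw: float, measured_mw: float) -> float:
    """Insertion loss in dB: 10 * log10(P_reference / P_measured)."""
    return 10 * math.log10(reference_mw / measured_mw)

# If the meter reads 0.10 mW at reference and 0.05 mW through the cable,
# half the optical power was lost: about 3 dB.
print(f"{loss_db(0.10, 0.05):.2f} dB")
```

If the meter reports in dBm rather than milliwatts, the same loss is simply the reference reading minus the measured reading.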

The disadvantage of the first method is that it understates the power loss in the link, since it includes just one connector. This is a major disadvantage in premises applications, as fiber lengths are usually short and the majority of loss is found in the connections at either end. The stringent power loss requirements of Gigabit Ethernet usually necessitate that the double-ended method be used.

However, the double-ended method also has its disadvantages. Firstly, when going from the reference set-up to the test set-up, it is necessary to disconnect one end of a patch cord from the tester. It is easy to mistakenly disconnect a patch cord from the source instead of from the detector, which will lose the reference and seriously compromise the test results. Secondly, extreme care is required in disconnecting a patch cord from the detector end of the tester, as dirt and other elements can damage the detector.

A simple adaptation of the double-ended method retains its accuracy while avoiding its disadvantages. The procedure adds a short test jumper with a connector so that the results match those obtained with the double-ended method. As with the original approach, the results contain the loss of the fiber cable plus the two connectors, while the two patch cords and their associated connectors are referenced out. This approach provides the same high level of accuracy as the double-ended method while making it unnecessary to disconnect the patch cords from the test equipment, reducing the possibility of errors caused by reinsertion of patch cords or by contamination or damage of the tester's fiber interfaces.

By using the proper equipment and methods, cabling professionals and network owners can ensure that the fiber plant will meet the requirements of Gigabit Ethernet.

Brad Masterson is Product Manager – Networks Division at Fluke Electronics Canada, LP in Mississauga, ON. A specialist in electronic test and measurement in the high-tech and industrial market, Mr. Masterson has close to 20 years of experience in the field. He is a Certified Electronic Engineering Technologist registered with OACETT and is currently a member of BICSI.
