
Focus on…Maintenance & Testing: Gaining Knowledge on Return Loss

In the wonderful -- but often complicated -- world of testing, it is critical to understand return loss, what causes it, and the reasons why we must test it.

May 1, 2002  


Since the introduction of TIA/EIA-568-A’s Telecommunications Systems Bulletin 67 (TSB-67), cabling standards have not been the same. We have gone from reporting four relatively simple test parameters to 12 test parameters for proposed Category 6 cabling, many of which require bi-directional testing as in TIA/EIA-568-B.

One of these additional tests was the requirement to measure the return loss of an installed link. But what is return loss really, and why do we need to measure this parameter?


Return loss is the measured ratio of the signal power transmitted into a cabling link to the power reflected back to the source. The easiest way to understand this is to think of return loss as an echo reflected back by changes in the impedance of a cabling link. Any variation in impedance will cause some of the original signal to be returned to the source. Real-life cabling systems will never have a perfectly uniform impedance structure, and will therefore always have some measurable return loss. (Please see Figure 1).

When looking at the test results a question sometimes arises: How much is a dB? As noted above, return loss is measured as a ratio. And that is exactly what a dB is — simply a ratio with a logarithmic twist.

In return loss measurements, the dB is reported as 20 times the base-10 logarithm of a voltage ratio. But why do we use dBs instead of just looking at the absolute voltage ratio? The logarithmic dB scale is handy because it covers a very wide dynamic range: you can see small variations as well as large ones. A 1 dB change corresponds to a voltage ratio change of roughly 12 per cent. If a signal is represented as having a 6 dB loss, the level is approximately 50 per cent of the original voltage level.

For example, the minimum allowable return loss for a Category 5e permanent link at 100 MHz is 12 dB; that is, the reflected signal must be at least 12 dB below the originating signal. Converting back from the dB scale noted above, 12 dB equals 6 + 6 dB, which is a voltage ratio of 1:4. Therefore, in this example, you may get up to one quarter of the signal back in the form of reflections. As we start to understand this concept, it should be clear that the higher the return loss value on a test report, the better the system’s efficiency in transmitting the signal.
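The dB arithmetic above is easy to verify numerically. The short sketch below converts between dB and voltage ratios using the 20 × log10 definition; the function names are my own, not part of any standard.

```python
import math

def db_to_voltage_ratio(db: float) -> float:
    """Convert a loss in dB to the fraction of the original voltage remaining."""
    return 10 ** (-db / 20)

def voltage_ratio_to_db(ratio: float) -> float:
    """Convert a voltage ratio to a loss in dB (positive value = loss)."""
    return -20 * math.log10(ratio)

# 6 dB of loss leaves roughly half of the original voltage
print(db_to_voltage_ratio(6))    # ≈ 0.501

# The 12 dB Category 5e permanent-link limit at 100 MHz corresponds to
# a reflection of about one quarter of the original signal
print(db_to_voltage_ratio(12))   # ≈ 0.251
```

Running the conversion at 1 dB also confirms the earlier claim: a 1 dB change moves the voltage ratio by roughly 12 per cent.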


The end user may notice the effects of return loss as a slow network connection, or a connection with erratic behaviour. Network protocol analyzers may see this as many TCP re-transmissions during data transfer. In the worst-case scenario, the user may not even be able to log into the network.

Let’s consider two types of LAN applications. One type of LAN application sends the signals over a wire pair in only one direction (and a signal in the opposite direction on a different wire pair). Examples include 10BASE-T and 100BASE-TX where the return loss causes small changes in the high-frequency signal levels at the receive end. This is detected as noise in the signal detector in the network interface circuit (receiver) and is obviously undesired.

The second type of LAN application sends signals in both directions at the same time on each wire pair and uses “traffic directors” (directional couplers) to separate outgoing and incoming signals. An example is 1000BASE-T (Gigabit Ethernet), where reflected signals find their way back into the signal detector in the network interface and appear as additional noise, which is also undesired. (Please see Figure 2).


Connectors will cause reflections in both positive and negative directions (also known as bipolar reflections). These reflections will be approximately the same magnitude for each direction. (Please see Figure 3).

The first impulse is high if the impedance of the connector is high, or low if the impedance of the connector is low. Generally, a connector’s effect on return loss has more of an impact at high frequencies. Poor installation practices also contribute to return loss problems, particularly at high frequencies.

Cable also introduces two types of reflections. Some reflections come from the beginning and end of the cable segment, resulting from a “mean characteristic impedance” mismatch, and others come from variations along the length of the cable. (Please see Figure 4).

As it turns out, the reflections that occur within the cable segment are generally not very serious and tend to show up at high frequencies, where the return loss limits are not so tight. The reflections at the beginning and end of the cable are generally the most serious. The reason is that, given an appropriate distance, the phase of the reflection from the remote end is turned around and can add to the reflection from the near end, particularly in the frequency range where the pass/fail limits are most stringent. Most return loss problems originate from cable characteristic impedance mismatches.
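The phase-addition effect can be sketched with a simplified two-reflection model: one echo from the near end and one from the far end, delayed by the round-trip time. Cable attenuation is ignored, and the segment length, velocity of propagation, and reflection magnitudes below are illustrative assumptions, not values from any standard.

```python
import cmath
import math

NVP = 0.69        # assumed nominal velocity of propagation (fraction of c)
C = 3.0e8         # speed of light in m/s
LENGTH_M = 40.0   # assumed cable segment length

# Time for the far-end echo to travel down the segment and back
ROUND_TRIP_S = 2 * LENGTH_M / (NVP * C)

def combined_echo(freq_hz: float, r_near: float = 0.05,
                  r_far: float = 0.05) -> float:
    """Magnitude of the near-end and far-end reflections summed at the source.

    The far-end echo arrives one round trip later, so its phase rotates
    with frequency; depending on that phase, the two echoes reinforce or
    cancel. Attenuation is omitted from this simplified sketch.
    """
    phase = -2 * math.pi * freq_hz * ROUND_TRIP_S
    return abs(r_near + r_far * cmath.exp(1j * phase))

# Where the round-trip phase is a whole number of turns, the echoes
# reinforce; half a turn later, they cancel.
f_reinforce = 1.0 / ROUND_TRIP_S
f_cancel = 0.5 / ROUND_TRIP_S
```

Sweeping `combined_echo` over frequency produces the characteristic ripple in measured return loss: peaks where the two reflections line up in phase and nulls where they oppose.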

In addition, reflections from cable mismatches occur at the same location as connector reflections. So, the unipolar type of reflection from a cable characteristic impedance mismatch combines with a bipolar type of reflection from a connector and becomes asymmetrical. (Please see Figure 5).

To compound the problem, all cabling links also have attenuation, which is the signal loss from one end of the link to the other. If the component causing the return loss is deep into the system, the reflected signal will be reduced by this attenuation effect. If the reflection is close to the source (for example, at a wall jack where you plug in your patch cord), the reflected signal will not be attenuated, but the received signal from the far end will be.
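This location dependence follows from simple dB bookkeeping: the signal is attenuated once on the way to the reflecting component and again on the way back. A minimal sketch, with hypothetical return loss and attenuation figures chosen purely for illustration:

```python
def echo_level_db(return_loss_db: float, attenuation_to_fault_db: float) -> float:
    """Level of the echo back at the source, in dB below the transmitted signal.

    The signal passes through the cable's attenuation twice (out and back),
    so a reflection deep in the link returns a weaker echo than the same
    reflection near the source.
    """
    return return_loss_db + 2 * attenuation_to_fault_db

# The same 15 dB reflection, at the wall jack versus deep into the link
# (assuming 4 dB of one-way attenuation to the distant fault):
near_fault = echo_level_db(15, 0)   # 15 dB down: echo not attenuated at all
far_fault = echo_level_db(15, 4)    # 23 dB down: echo attenuated twice over
```

This is why a poor connection at the near-end wall jack is often the dominant contributor to measured return loss: its echo comes back at full strength.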

In the 1000BASE-T example, this is like a “double whammy” where the transceiver is trying to receive an attenuated signal from the far end and, at the same time, remove a significant return loss reflection originating from signals sent at the local end.


The standard applies a rule to ease the return loss requirements on links with low attenuation. It is acknowledged that these links will not be as adversely affected, because significant signal strength is maintained from the far-end source. This usually has the most impact at the lower test frequencies, where attenuation is less than 3 dB. The “3 dB rule” is a very simple concept, easily determined by measurement, and it is implemented in current certification testers for the Category 5e and proposed Category 6 standards.

As an example, imagine measuring a link whose return loss is below the standard’s acceptable values (a fail) at a particular frequency (for example, 77.6 MHz). You then find that this link has an attenuation of only 2.6 dB at that frequency, so you can ignore the return loss measurement because attenuation is less than 3 dB. In fact, we can ignore all measured return loss values at any frequency where attenuation is less than 3 dB. In this case, the link would not actually fail; it would pass according to the 3 dB rule. (Please see Figure 6).
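The logic of the rule can be expressed in a few lines. The sketch below is my own simplification of how a tester might apply it at a single frequency; the numeric return loss values are hypothetical, and only the 2.6 dB attenuation figure comes from the article's example.

```python
def return_loss_result(rl_measured_db: float, rl_limit_db: float,
                       attenuation_db: float) -> str:
    """Apply the 3 dB rule at one frequency point.

    If insertion loss (attenuation) at that frequency is below 3 dB,
    the return loss measurement is disregarded rather than failed;
    otherwise, the measured value is compared against the limit
    (higher return loss in dB is better).
    """
    if attenuation_db < 3.0:
        return "PASS (3 dB rule)"
    return "PASS" if rl_measured_db >= rl_limit_db else "FAIL"

# The article's scenario at 77.6 MHz: return loss is under the limit,
# but attenuation there is only 2.6 dB, so the link still passes.
print(return_loss_result(10.0, 12.0, 2.6))  # PASS (3 dB rule)

# The same marginal reading with attenuation above 3 dB would fail.
print(return_loss_result(10.0, 12.0, 3.5))  # FAIL
```

A real certification tester applies this check at every frequency point in the sweep, which is why short links often show no return loss failures at all.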

Most Category 5e cable runs shorter than 8.2 metres, or proposed Category 6 runs shorter than 5.7 metres, may never be subject to return loss limits. This is because attenuation on these links is almost always less than 3 dB over the entire test frequency range. But do not be caught off guard: return loss measurements have posed a significant challenge for most normal-length Category 6 cable runs.


Return loss is an important parameter for high-speed LAN network systems. Return loss measurements are challenging because they are extremely sensitive to properties of test interfaces.

Many manufacturers of components and test equipment have provided a significant amount of data and feedback to the standards bodies. This has been done in an effort to make sure our current — and future — technologies will function properly if tested within certain limits.

To install the cabling components and not perform a post-installation test would be to ignore the importance of reliable LAN operation for end users. The infrastructure, also known as Layer 1 of the OSI model, is what supports all of the signals that carry our critical data, and it is ultimately up to the network owner to make sure it has been properly certified for its intended use.

Brad Masterson, C.E.T., is Product Manager at Fluke Networks in Canada and a member of Cabling Systems’ Editorial Advisory Board.
