Engineering & Design – Above and Beyond Standards Compliance

There are several issues beyond what the standards require that must be taken into account when evaluating high-performance cabling systems.


November 1, 2000  



Unshielded twisted-pair (UTP) cabling is the most popular type of cabling installed commercially in North America today. UTP's relatively low cost and its capability to provide gigabit-per-second throughput rates have made it the system of choice for most IT managers.

And its prospects look bright. The completion of the enhanced Category 5 standard and the pending completion of the Category 6 standard have helped to ensure that UTP will continue to maintain its dominance for the near future. These new standards are the latest developments in qualifying UTP cabling systems for higher and higher transmission rates.

But what does it really mean to qualify a cable or connector for standards compliance to a specific category? Are there other things to consider when looking at a UTP cabling system’s performance?

STANDARDS COMPLIANCE MEASUREMENTS

For UTP cabling systems, standards compliance is the cornerstone of acceptance in an increasingly competitive marketplace. Standards compliance for UTP systems is based on the effects passive components and assembled channels have on signals at various frequencies. The goal of the standards is to limit the signal distortion and noise introduced by the cabling system itself. The development of the Category 5e and Category 6 standards has continued to expand our knowledge of the effects that the cabling system has on the transmitted signal, especially at higher frequencies. Even as these documents are completed, it should be recognized that the TIA standards are living documents that are continually updated and revised.

The parameters that are measured for standards compliance can be put into two categories: signal loss and noise. Attenuation, or Insertion Loss as it is now called, measures the amount of signal strength that is lost as the signal travels down the transmission path. Near End Crosstalk (NEXT), Far End Crosstalk (FEXT), and Return Loss measure the amount of noise that may be induced on any given pair within the channel. All of these measurements evaluate the interactions that occur within the cabling system itself; they focus on the effects that each of the passive components has on the signal as it travels the channel.

ATTENUATION TO CROSSTALK RATIO

A popular performance reference for UTP cabling systems has been Attenuation to Crosstalk Ratio (ACR). ACR is simply the difference between the NEXT value and the attenuation value at any given frequency in the frequency range of the system. Plotted on a chart, the Category 5 and Category 6 limit lines appear as shown in Figure 1; the distance between the two lines is the ACR.

The highest frequency at which the ACR remains positive, or the frequency at which the ACR is equal to zero, is sometimes quoted as the upper limit of the usable bandwidth for the system. This is the frequency at which the attenuation and the crosstalk values are equal. The reasoning is that this is the frequency at which the signal has been attenuated to be the same magnitude as the crosstalk noise generated by adjacent pairs.

This implies that ACR is equivalent to the Signal to Noise Ratio (SNR), which is an important value for communications systems. The claim that a system cannot handle frequencies beyond the zero-ACR point assumes that the noise in the system is due only to NEXT, and that the signal is reduced only by the attenuation of the cabling system.

The assumption that ACR is equivalent to SNR falls short when examining what happens in the case of a four-pair transmission system like Gigabit Ethernet. In this system, all four pairs transmit simultaneously in both directions. NEXT is not the only source of noise in the system; there are other sources such as FEXT and signal reflections due to impedance mismatches. In addition, attenuation is not the only signal reducer; the same reflections that contribute to the noise in the system also reduce the transmitted signal power as it travels down the channel.

ACR has also been related to the bit error rate of a system. Annex J of the 568-A standard states: “an ACR of 12 dB to 16 dB is considered the practical limit to ensure acceptable BER.” This would imply that the bandwidth of a system is limited to the frequency at which the ACR is 12 dB or 16 dB. However, the point at which ACR=16 dB for Category 5 is 50 MHz and the point at which ACR=16 dB for Category 6 is 117 MHz. Using ACR to define the bandwidth for Category 5 and Category 6 as 50 MHz and 117 MHz, respectively, would not be realistic.
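To make the arithmetic concrete, the following Python sketch computes ACR from hypothetical swept-frequency attenuation and NEXT curves, then locates the zero-ACR crossover and the highest frequency at which ACR stays at or above 16 dB. The loss models are invented for illustration and do not represent the limits of any TIA category.

```python
# A minimal sketch of the ACR calculation, using hypothetical
# (not standards-based) attenuation and NEXT models in dB.
import numpy as np

freq_mhz = np.linspace(1, 250, 500)
attenuation_db = 1.8 * np.sqrt(freq_mhz)    # signal loss rises with frequency
next_db = 60 - 15 * np.log10(freq_mhz)      # NEXT loss falls with frequency

# ACR is simply the difference between the NEXT and attenuation values.
acr_db = next_db - attenuation_db

print(f"ACR stays positive up to ~{freq_mhz[acr_db > 0][-1]:.0f} MHz")
print(f"ACR stays at or above 16 dB up to ~{freq_mhz[acr_db >= 16][-1]:.0f} MHz")
```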

In addition, there are times when simple standards compliance may not guarantee optimal network performance. Figure 2 shows a plot of the return loss measurements for a 305-foot Category 5 channel versus the TSB-95 recommended limit. All of the individual components in this channel conformed to the Category 5 component requirements, and this plot shows that this channel also met the TSB-95 return loss recommendations.

Figure 3 shows a plot of the impedance versus distance measurements for the same channel. Notice that the impedance values for the first 20 feet of the channel are significantly different than the impedance of the rest of the channel. These impedance mismatches may cause significant signal reflections in the network and increase the noise level at the receiver on the near end of the cable. This may increase the bit error rate of the network and reduce network performance.

TRUE NETWORK PERFORMANCE

There are issues beyond what the standards require that should be examined when evaluating true network performance. It should be kept in mind that the cabling system is only a part of an active network. As the standards committees developing the Category 6 standard have discovered, there are more parameters that can affect network performance than are currently examined by today’s standards.

True network performance should be measured in throughput, which is limited by the channel capacity. Research done in the 1940s by Claude Shannon, a Bell Labs engineer, showed that a digital network's channel capacity is related to the usable bandwidth and SNR of the system. This relationship between channel capacity, bandwidth and SNR is defined in what is now known as Shannon's Law, expressed in the following equation: C = W * log2(1 + S/N).

Shannon’s Law states that the channel capacity (C) is equal to the usable bandwidth (W) multiplied by the log base 2 of the quantity one plus the signal to noise ratio (S/N).

When looking closely at Shannon’s Law, it is apparent that as the usable bandwidth increases, the channel capacity also increases. In fact, the usable bandwidth has the biggest effect on the total channel capacity. In addition, as the SNR increases, the channel capacity also rises. By controlling the amount of signal degradation and limiting the amount of noise contributed by the cabling system, it is possible to raise the channel capacity. Therefore, channel capacity (and usable bandwidth) is essentially governed by signal integrity and total noise in the system from all sources.
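As a rough illustration, the following Python sketch evaluates Shannon's Law directly. The 100 MHz bandwidth and 20 dB SNR figures are arbitrary assumptions chosen for the example, not characteristics of any particular category of cabling.

```python
# A minimal worked example of Shannon's Law: C = W * log2(1 + S/N).
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity in bits per second for a given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)   # convert SNR from dB to a ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed values: 100 MHz of usable bandwidth at a 20 dB SNR.
capacity = shannon_capacity(100e6, 20.0)
print(f"Channel capacity: {capacity / 1e6:.0f} Mb/s")   # about 666 Mb/s
```

Note how the bandwidth term scales the capacity linearly, while the SNR contributes only logarithmically; this is why the usable bandwidth has the biggest effect on the total capacity.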

Four basic things in a network affect signal integrity and noise: the transmitted signal, the receiver’s capabilities, the cabling system and external influences.

TRANSMITTED SIGNALS

The transmitted signal used to test for standards compliance is a very different waveform from the signal actually transmitted by a network. For example, when examining the signals in the time domain, it is apparent that the signal generator used for standards compliance testing produces a sinusoidal waveform at a single frequency (Figure 4). The frequency of this signal is changed in steps to test the entire frequency range in question.

On the other hand, a digital MLT-3 waveform viewed in the time domain (Figure 5), such as the signal used in a 100BASE-TX network, looks very different.

Standards compliance testing involves examining the response to a single frequency, one frequency at a time. With a digital signal, there are actually multiple frequencies being generated simultaneously; these vary from moment to moment depending on the symbols (or bits) being sent. As different frequencies have a different response within the cabling system (for instance, lower frequencies are attenuated less than higher frequencies), the effect the cabling system has on the digital signal may be very different from the effect on the analog sine wave.
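One way to see this is to examine the spectrum of a digital signal directly. The Python sketch below encodes a random bit stream with a simplified MLT-3 encoder at the 100BASE-TX symbol rate; it omits 4B5B coding, scrambling and pulse shaping, so it only illustrates the point that the signal's energy is spread across a wide band rather than concentrated at a single tone.

```python
# A simplified MLT-3 model showing that a digital signal occupies many
# frequencies at once (no scrambling or pulse shaping is modeled).
import numpy as np

rng = np.random.default_rng(0)
symbol_rate = 125e6                     # 100BASE-TX line rate in symbols/s
samples_per_symbol = 8
fs = symbol_rate * samples_per_symbol   # sampling rate of the model

# MLT-3 cycles through the levels 0, +1, 0, -1 on each '1' bit and
# holds the current level on each '0' bit.
bits = rng.integers(0, 2, size=4096)
levels = (0, 1, 0, -1)
state = 0
symbols = []
for b in bits:
    if b:
        state = (state + 1) % 4
    symbols.append(levels[state])

signal = np.repeat(symbols, samples_per_symbol).astype(float)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Unlike the single-tone compliance test signal, energy appears across
# a broad range of frequencies.
band = freqs[spectrum > 0.1 * spectrum.max()]
print(f"Significant energy spans roughly {band.min()/1e6:.1f} to {band.max()/1e6:.1f} MHz")
```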

It is known that the cabling system will never improve the signal. Looking at the same MLT-3 signal from the previous example, and taking into account the cumulative effects of attenuation, crosstalk and ambient noise, the received signal can be seen in Figure 6. This signal is very different from the transmitted signal.

SIGNAL TO NOISE RATIO

As frequency increases in a UTP cabling system, both the signal loss and the noise generated by the cabling system increase. The SNR in a cabling system is equal to the signal strength at the receiver divided by the total noise from all sources; thus, the SNR decreases as the frequency increases. The usable bandwidth of a UTP system for a given SNR will depend on how well the channel limits the signal distortion and noise produced by the system.

A cabling system that limits the amount of signal loss and noise that it introduces at higher frequencies will have a higher usable bandwidth for a target SNR. However, all noise sources must be taken into account when measuring the SNR of a cabling system.
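The sketch below illustrates that idea with invented channel data: it models the received signal level and the total noise from all sources in dB, then finds the highest frequency at which the SNR still meets a target. The curves and the 16 dB target are assumptions for illustration only.

```python
# A minimal sketch of usable bandwidth for a target SNR, using
# hypothetical signal and noise models (levels in dB relative to
# the transmitted signal).
import numpy as np

freq_mhz = np.linspace(1, 250, 500)
received_signal_db = -1.8 * np.sqrt(freq_mhz)     # attenuated signal falls
total_noise_db = -60 + 15 * np.log10(freq_mhz)    # NEXT + FEXT + reflections rise

# In dB, the SNR is simply the signal level minus the total noise level.
snr_db = received_signal_db - total_noise_db

target_snr_db = 16.0
usable = freq_mhz[snr_db >= target_snr_db]
print(f"Usable bandwidth for a {target_snr_db:.0f} dB SNR: ~{usable[-1]:.0f} MHz")
```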

SNR AND BIT ERROR RATES

SNR is also related to bit error rates (BERs) within networks. Several complicated equations are used to determine the SNR required to maintain a given BER, depending on the encoding of the signal. Without examining the actual equations, it can generally be stated that a better SNR yields a lower BER for a given bit rate. In addition, a lower BER allows for maximum capacity and throughput of the dynamic network: the fewer the errors, the fewer retransmissions will be required.
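As one simple illustration of the kind of relationship involved, the Python sketch below uses the classic Gaussian-noise formula for binary signaling, BER = Q(√SNR). Real LAN encodings such as MLT-3 or PAM-5 require more involved formulas, so this only shows the general trend: modest SNR improvements produce dramatic BER reductions.

```python
# A minimal sketch of the SNR-to-BER trend for binary signaling in
# Gaussian noise (BER = Q(sqrt(SNR))); an illustrative model only.
import math

def q_function(x: float) -> float:
    """Tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for snr_db in (10, 14, 18, 22):
    snr = 10 ** (snr_db / 10)            # convert dB to a linear ratio
    ber = q_function(math.sqrt(snr))
    print(f"SNR {snr_db:2d} dB -> BER ~ {ber:.1e}")
```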

As for recent guarantees of “zero” bit error rates, a word of caution: zero BERs do not exist in the real world. A BER measurement is dependent upon the number of errors detected in a finite number of bits sent. The fact that errors did not occur in an individual test does not mean that no errors will ever occur.

Error rate calculations are actually based on the probability of errors occurring. The probability is never zero. If the noise level is sufficiently small, the SNR for a given system becomes large enough that the probability of an error is small (10⁻¹² or 10⁻¹⁴), and no appreciable degradation of network performance will occur. This is especially true when the BER is orders of magnitude smaller than required for the network type.
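A quick worked example shows why such probabilities are negligible in practice; the figures below are illustrative, not measurements.

```python
# Expected interval between bit errors at a given BER and bit rate.
# Illustrative numbers: a 100 Mb/s link running at a BER of 1e-12.
bit_rate = 100e6   # bits per second
ber = 1e-12        # probability of error per bit

seconds_per_error = 1 / (bit_rate * ber)
print(f"One error expected every {seconds_per_error:,.0f} s "
      f"(about {seconds_per_error / 3600:.1f} hours)")
```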

LOOKING AT ALL FACTORS

Standards compliance looks only at the passive channel components. The interactions between these passive devices are still not completely understood, and the standards committees are learning something new every day. The analog frequency-based evaluations used for standards compliance may miss things that can affect true network performance. True network performance evaluations must take into account all factors that affect signal integrity and noise sources.

Cabling systems that do the best job of limiting the signal distortion and noise they introduce will provide a better SNR, lower BERs and maximum dynamic network performance.

Todd Harpel, RCDD, has more than 12 years of experience in communications cabling infrastructure design and specification. He is currently the marketing programs manager for Berk-Tek of New Holland, PA (an Alcatel company), a leading manufacturer of copper and optical fiber communications cable products.

