
Carrier Ethernet Test Methods

Simultaneous testing of differentiated Ethernet services is no longer optional for carriers racing to meet high customer expectations for reliability and quality of service


September 1, 2007  



Today’s Ethernet networks carry a variety of data, voice, and video services alongside network management traffic.

Ensuring quality of service (QoS) while maximizing throughput is critical to proper performance and a high-quality customer experience; however, traditional testing methods fall short in their ability to monitor, analyze and identify problems. To turn up VLAN-, MPLS- and IP-based services, and to keep service quality and revenues up, a multi-stream, multi-port, multi-service testing model is required. As triple-play services become pervasive among end users and the demand for commercial Ethernet traffic continues to grow, network providers are under increasing pressure to update aging architectures. Competition between traditional telephony and cable providers in the Ethernet space means networks must be turned up and generating revenue as quickly and cost-effectively as possible.

Proper network pre-qualification and monitoring are required to ensure that service quality is maintained from the headend, through the network backbone, and to the customer premises as more customers are added and more services are offered.

Evolution of Ethernet Services

Traditionally, the bulk of data services provided by telecommunications companies have been point-to-point, running for example from one business campus to another or from one business data center to another.

These early Ethernet services were relatively straightforward to turn up, monitor and maintain, as the majority of testing was isolated to a single dark fiber, a single wavelength on a DWDM network, or a channel in an Ethernet over SONET/SDH network.

Performance was verified by completing hard loopback tests running at data rates of 10 Mbps, 100 Mbps, or even 1 Gbps, based on the bandwidth sold.

Verification of Ethernet services used to be a simple task. Many services were sold at a fixed bandwidth, such as 100 Mbps, without any class of service designation: only a promise of bandwidth, which in most instances was not guaranteed.

Service providers did not provide traffic grooming or policing to ensure QoS; they simply installed the network, verified end-to-end throughput, made a note of the round-trip delay and performed troubleshooting when customers complained.

However, as the demand for Ethernet services and bandwidth continues to increase, network providers are under pressure to conform to standards established by the ITU and Metro Ethernet Forum (MEF) and to add more capacity without impacting existing services.

Today’s network topology has evolved beyond point-to-point, as network providers now deliver voice and video services and Internet access to retail customers.

Because these services all reside on the same network, each consuming a portion of the shared bandwidth, they must all be qualified at the same time. The three types of traffic have very different requirements, and each stresses the network in a unique way. Figure 1 (below) shows a typical carrier network with Gigabit Ethernet, DWDM, and Ethernet over SONET/SDH access points.

To make QoS measurements, two test sets are used, with tests run from the customer premises to the central office, video headend or other service point.

Voice, for instance, requires extremely low latency, typically 50 milliseconds end-to-end. Any delay or gap in transmission is immediately noticeable to users. In a generation of packet-based and mobile telephony, latency is perhaps even more important than the sound quality of the voice itself, meaning a few dropped packets are more acceptable than any form of packet delay.

Video presents a completely different set of challenges. With video, extremely large amounts of data are transported from a central video server or headend to the customer premises and ultimately to the television set. Home video delivery is not a real-time application like video conferencing (which has the same latency requirements as voice calls).

Small changes in latency go unnoticed and are typically smoothed over by constant buffering on the set. The biggest challenge with sending large amounts of data over a long period of time is ensuring every packet is delivered. If packets are dropped, the picture can pixelate, freeze, or disappear altogether.

Internet traffic and non-critical data services are the least stringent in terms of service level parameters when compared to voice and video.

If a few frames are lost, traditional TCP/IP mechanisms adequately manage data restoration provided the frame loss stays under 1% or so.

Latency is also an issue, but in many cases, the primary source of delay is not with the Ethernet service, but with the ISP and Internet servers themselves. For mission-critical data services such as Storage Area Networks (SAN), latency and packet loss must be minimized.

Traffic must be verified

In addition to customer services, it is important to verify network management traffic, which signals devices such as routers and switches and directs communications between them. In all, this traffic may account for only about 1% of network traffic, yet it must be transmitted reliably and typically has the highest priority of all traffic.

Services are differentiated in the network in a number of ways, depending on the architecture. With VLAN-based services, including those using stacked VLAN tagging, or Q-in-Q, each service is assigned a VLAN tag and a user priority. The priority is a number from 0 to 7 based on traffic type (see Table 1).

MPLS works in a similar fashion, while IP-based services utilize the Type of Service (TOS) field in the IP header. Of course, some networks combine more than one of these technologies.
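As a rough illustration of how these fields carry a class of service, the short Python sketch below packs an 802.1Q VLAN tag with a chosen user priority and sets the precedence bits of an IPv4 TOS byte. The function names and example values are purely illustrative and are not taken from any particular test set or standard profile.

```python
import struct

def dot1q_tag(priority: int, vlan_id: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID (0x8100) followed by the
    3-bit user priority (PCP), 1-bit DEI and 12-bit VLAN ID."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4095
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

def ip_tos(precedence: int) -> int:
    """Return an IPv4 TOS byte with the 3-bit precedence field set
    (the remaining bits are left at zero for simplicity)."""
    assert 0 <= precedence <= 7
    return precedence << 5

# Example: a video stream on VLAN 200 at user priority 4,
# and an equivalent precedence value for an IP-based service.
print(dot1q_tag(priority=4, vlan_id=200).hex())  # '810080c8'
print(hex(ip_tos(4)))                            # '0x80'
```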

Evolution of Testing Methodology: Traditional test sets have focused on one piece of this puzzle at a time, testing each type of service separately.

With new triple play services and the need to generate multiple types of traffic simultaneously, these test sets no longer provide adequate test coverage. Because differentiated services now exist side-by-side on the network rather than on their own dedicated fibers, wavelengths, or TDM tributaries, they must be tested simultaneously.

To verify differentiated services, the test set must generate traffic streams in the same format or formats (VLAN, MPLS, IP) used by the network architecture for each of the service types.

The priority and/or TOS for each traffic stream must be specified, based on the appropriate class of service. At the far end, the test set must then separate out each traffic stream and perform QoS measurements on each stream. (See Figure 2.)
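A minimal sketch of this multi-stream model is shown below, assuming a hypothetical 100 Mbps triple-play profile; the stream names, priorities and rates are illustrative only, not taken from any standard or product. One side generates a stream per service class, and the far end separates received frames by user priority and accumulates per-stream statistics.

```python
from dataclasses import dataclass, field

@dataclass
class Stream:
    """One differentiated traffic stream in a multi-stream test."""
    name: str
    vlan_priority: int          # 802.1p user priority, 0-7
    rate_mbps: float            # rate to generate for this class
    tx_frames: int = 0
    rx_frames: int = 0
    delays_ms: list = field(default_factory=list)

# Hypothetical triple-play test profile for a 100 Mbps service.
streams = {
    7: Stream("network management", 7, 1.0),
    5: Stream("voice", 5, 10.0),
    4: Stream("video", 4, 60.0),
    0: Stream("internet data", 0, 29.0),
}

def classify(rx_priority: int, delay_ms: float) -> None:
    """Far-end side: separate received frames by user priority and
    accumulate per-stream QoS statistics."""
    s = streams.get(rx_priority)
    if s is not None:
        s.rx_frames += 1
        s.delays_ms.append(delay_ms)
```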

While each individually tested service may meet QoS standards, the increased traffic load created when multiple services run together can negatively impact performance. The services should be tested not only at their maximum subscribed rate, but also at higher rates to verify network policing and load management.

To verify that a network element can maintain QoS, it too must be loaded to full capacity. Rather than testing through a single ingress and egress port, as if verifying a single customer connection, the device must be tested through all of its ports. Even if the full port capacity of the device is not being utilized, its performance must be characterized upon installation so that future capacity can be gauged.

The three key metrics for characterizing differentiated services are frame loss, latency, and packet jitter or packet delay variation.

When one or more packets of data traveling across a network fail to reach their destination, the effect on the service may be minor, but it may lead to further degradation as packets are resent.

Service testing should not only verify that the ratio of lost frames falls within the acceptable limits defined by the class of service, but also verify that higher-priority services have a lower loss ratio than lower-priority ones.

Latency is the delay between the time a frame is transmitted and when it is received. Low latency is critical for voice as described, as well as for Storage Area Networks (SAN) over Ethernet, where increased latency requires larger buffer-to-buffer credits. It also negatively impacts TCP sessions, where increases in latency have a profound effect on throughput.

Packet jitter, or packet delay variation, is the variation in the arrival times of successive packets.

For classic data applications, jitter is easily managed and not a key parameter. But for voice and video, jitter becomes a critical parameter that must be tested and verified to ensure quality of service.
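The sketch below shows how these three metrics might be derived for a single stream from the frame counts and per-frame delays collected during a test run. The numbers are hypothetical, and the delay-variation figure used here (the spread between the fastest and slowest frame) is only one of several possible definitions; test sets may compute it differently.

```python
def qos_metrics(tx_frames: int, delays_ms: list[float]) -> dict:
    """Per-stream frame loss ratio, average latency, and a simple
    packet delay variation figure. `delays_ms` holds one measured
    one-way delay per frame actually received."""
    rx_frames = len(delays_ms)
    loss_ratio = (tx_frames - rx_frames) / tx_frames if tx_frames else 0.0
    avg_latency = sum(delays_ms) / rx_frames if rx_frames else None
    jitter = (max(delays_ms) - min(delays_ms)) if rx_frames else None
    return {"loss_ratio": loss_ratio,
            "avg_latency_ms": avg_latency,
            "jitter_ms": jitter}

# Example: 1,000 frames sent, 998 received with delays of 0.8 to ~1.3 ms.
delays = [0.8 + 0.0005 * i for i in range(998)]
print(qos_metrics(1_000, delays))
# -> loss_ratio 0.002, avg_latency_ms ~1.05, jitter_ms ~0.5
```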

In some cases, bit error ratio (BER) is used as a QoS metric due to its traditional importance in TDM networks. BER is calculated by taking the ratio of errored data bits received to the total number of bits transmitted.

While an interesting measurement, it can be misleading as it is possible, for example, to measure a BER of zero on all received frames and still have a data loss of 97%. For this reason, Ethernet service metrics do not rely on BER testing.
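A quick worked example, using hypothetical numbers, shows why BER alone can be misleading: if 97 of 100 transmitted frames are dropped outright and the three frames that do arrive contain no bit errors, the measured BER is zero while the frame loss is 97%.

```python
def bit_error_ratio(errored_bits: int, received_bits: int) -> float:
    """BER as measured over the bits that actually arrive."""
    return errored_bits / received_bits if received_bits else 0.0

# Hypothetical run: 100 frames of 1,518 bytes sent, 97 dropped in
# transit, and the 3 delivered frames arrive without a single bit error.
frames_sent, frames_received = 100, 3
received_bits = frames_received * 1518 * 8

print(bit_error_ratio(0, received_bits))              # 0.0  -> BER looks perfect
print((frames_sent - frames_received) / frames_sent)  # 0.97 -> 97% frame loss
```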

With the increasing importance of Class of Service standards, Carrier Class Ethernet certification, and real-time applications, assuring QoS is a critical element in offering revenue-generating Ethernet services.

This assurance comes from properly testing all of the differentiated services using multi-stream traffic generation and prioritization techniques that had not played a large role in traditional point-to-point services.

Furthermore, special attention must be paid to the key QoS metrics and matched against the Service Level Agreement. By adopting test procedures focused on differentiated services testing, network providers can be confident that they are satisfying the needs of their customers today and building a reliable source of revenue for the future.

Patrick Riley is the product marketing manager at Sunrise Telecom, a San Jose, Calif.-based manufacturer of communications test and measurement products. The company’s Canadian operation is based in Montreal.

Table 1: VLAN User Priority

Priority Traffic type
1 Background
0 (Default) Best Effort
2 Excellent Effort
3 Critical Applications
4 Video, < 100 ms latency and jitter
5 Voice, < 10 ms latency and jitter
6 Internetwork Control
7 Network Control

The above table shows the typical VLAN user priorities and the associated traffic types.