A new buzzword called user-centric testing is beginning to make the rounds in networking circles. While it will never replace device-centric testing methods, some view it as a paradigm shift that can improve overall system performance.
October 1, 2003
If you are going to build a network infrastructure for people, why not test with people to ensure that it will meet the rigors of today’s customer service expectations?
As today’s challenging market conditions have pushed network equipment manufacturers (NEMs) and carriers to deliver higher quality services to end-users, there exists an industry-wide need to shift focus from traditional “device-centric” testing to “user-centric” testing.
Testing “to the box” with plain packet flows is no longer satisfactory in defining device quality.
Telco carriers and ISPs recognize that assuring the end-to-end quality of their services matters more than the quality of any individual device supporting those services.
NEMs are discovering that while the quality of their devices in isolation may be satisfactory, the post-deployment interaction with upstream and downstream network equipment (potentially developed by other vendors) largely defines a customer’s satisfaction.
The engineered quality of a networked device is being overshadowed by the quality of a larger system. A review of the current device testing and monitoring strategy reveals why this is the case.
Development-time testing has traditionally followed what might be labeled a “device-centric” paradigm.
By subjecting a device to a number of use cases, device-centric measurements such as CPU and memory usage, throughput, packet processing latency and the like are gathered to extrapolate the device’s potential operational quality.
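The device-centric loop described above can be sketched in a few lines. This is a minimal illustration only: the "device" is a stand-in Python function, and the metric names are hypothetical, but the shape — drive a use case through the device, harvest throughput and latency counters — is the paradigm in question.

```python
import time

def run_use_case(device_fn, packets):
    """Drive a stand-in device with a packet list and gather
    device-centric metrics: throughput and per-packet latency."""
    latencies = []
    start = time.perf_counter()
    for pkt in packets:
        t0 = time.perf_counter()
        device_fn(pkt)  # the "device under test" (here, just a function)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "packets": len(packets),
        "throughput_pps": len(packets) / elapsed if elapsed else float("inf"),
        "max_latency_s": max(latencies),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

# A hypothetical workload: 1,000 copies of a 40-byte packet,
# with payload reversal simulating per-packet processing work.
metrics = run_use_case(lambda p: p[::-1], [b"x" * 40] * 1000)
```

Note that every number this harness produces describes the device, not the user — which is precisely the limitation the article goes on to examine.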
Device-centric testing has proven useful in determining the potential availability and scalability of a given system, and the performance-constraining components are generally simple to identify.
More intelligent and predictive device monitoring applications in LAN/WAN environments can now gather device-metrics of several inter-operating computers, correlating multi-device operating conditions and identifying performance problems that would have otherwise gone unnoticed.
Practical experience, however, shows us that in the world of telecommunications and the Internet, testing the quality of such widely distributed systems (many components of which we don’t even own) requires us to look elsewhere for quality measurements.
Accessing the device-metrics of a system as large and complex as the Internet is simply not feasible, nor would we most likely be able to comprehend the results. The challenge facing our industry (and arguably the IT industry as well) is how to accurately test, measure and manage quality in a heterogeneous and widely distributed system.
Telco carriers, being judged by the quality of the services they deliver to end-users, are aware of the usefulness (and limitations) of the device-centric paradigm.
They appreciate the operational characteristics that device-metric testing and monitoring provide, as it is a language that we’ve all learned to speak. “32,000 PPPoE/PPPoA sessions at line rate for 40-byte packets” is the kind of device-metric necessary to take purchasing decisions to the next level.
What appears to be happening at that “next level,” however, is nothing less than a paradigm shift towards user-centric quality assessment of the activities that will eventually be encapsulated within those 32,000 sessions.
It may be reassuring to know that a user-centric testing paradigm is not marketing hype. It is a means of generating the realistic types of network-destined packets that our switches, firewalls, routers and multiplexers will be expected to process.
User-centric testing, as the name implies, is all about the user (otherwise known as “the guy who can’t log on,” “the employee who can’t access the corporate network via the VPN,” “the hacker who is launching DDOS attacks from a compromised web server,” etc.).
The point of user-centric testing is that during the entire day, individuals never stop to ask themselves what the quality of the devices supporting their activities is. Most operate in isolation and don’t worry about whether the DSLAM packet latency is higher than normal on a given day. They care about the quality of their online experience.
A choppy video stream, slow download or inaccessible Web site will reduce the perceived quality of a user’s experience. User-centric testing should help us determine how a given network device will contribute to this quality.
Any user activity, at the end of the day, manifests itself in the form of network packets and frames. The device under consideration might seem adept at handling various network packets, giving the test team the impression that the device will successfully deliver on specified QoS attributes — such as latency, response times and availability — in an integrated environment. Only user-level measurement can confirm whether that impression holds once real activities are in the loop.
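A user-centric metric looks quite different from a device counter: it summarizes what the user actually waited. A minimal sketch, assuming a hypothetical list of per-action response times gathered during a test run:

```python
def user_qos_summary(response_times_ms):
    """Summarize user-perceived quality from per-action response
    times -- the kind of metric device counters cannot supply."""
    ordered = sorted(response_times_ms)
    n = len(ordered)

    def pct(p):
        # Nearest-rank percentile, clamped to valid indices.
        k = max(0, min(n - 1, round(p / 100 * n) - 1))
        return ordered[k]

    return {"p50_ms": pct(50), "p95_ms": pct(95), "worst_ms": ordered[-1]}

# Hypothetical response times for eight user actions; one slow
# outlier (2500 ms) dominates the tail just as a real user would feel it.
summary = user_qos_summary([120, 95, 400, 110, 130, 105, 2500, 115])
```

A device whose average packet latency looks healthy can still produce a 95th-percentile user wait of several seconds — which is the gap between the two paradigms in a nutshell.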
In light of this discussion, we may wonder how Test and Measurement vendors plan to assist us in evolving our quality assurance paradigm to include user-centric testing. If we listen closely to the marketing messages, we’ll find that there is an awareness of the necessity to enhance testing products and that “real world” and “real user” traffic have something to do with it.
Regardless of the adjective used to describe traffic (“real world”, “real user”, etc.), it sounds interesting, because there seems to be an intrinsic value to things that are real. From an engineering point of view, however, “fake” traffic doesn’t exist, so you might stop and ask “What exactly are they talking about?” The traffic passing through our firewalls, routers and multiplexers is as real as it needs to be to induce CPU processing, encapsulation, transmission, routing table lookups, etc.
The emphasis on “traffic” may be due to our familiarity with the traditional device-centric paradigm, because traffic makes devices “do” things. Whether it be in the lab or in the “real world,” traffic is quantified via device-centric measurements and generates device-centric statistics that we can monitor.
A “real user” focus, on the other hand, does seem to relate to the user-centric metrics that we need in order to improve the overall quality of our devices. If we can maintain our focus on what real users do and allow the testing product to generate the associated traffic, then we can measure and evaluate the quality of user activities. Specifically, user attributes that should be considered in a user-centric test approach include:
Number of user interfaces (browsers, windows, etc.) operating concurrently;
User activity on each interface (web page download, ftp file upload, video stream display, etc.);
User Think Time between consecutive actions on each interface, and
Number of iterations of a user action on a given interface (e.g. a user downloading bank statements for multiple accounts sequentially).
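The four attributes above can be captured in a small user profile and expanded into a timed action plan that a traffic generator could then play out. The class and field names here are illustrative, not taken from any product:

```python
from dataclasses import dataclass

@dataclass
class Interface:
    """One concurrent user interface (browser, window, etc.)."""
    activity: str        # e.g. "web page download", "ftp file upload"
    think_time_s: float  # user think time between consecutive actions
    iterations: int      # repeated actions on this interface

def schedule(interfaces):
    """Expand a user profile (concurrent interfaces, per-interface
    activity, think time, iterations) into a timed action list."""
    actions = []
    for iface in interfaces:
        t = 0.0
        for i in range(iface.iterations):
            actions.append((round(t, 2), iface.activity, i + 1))
            t += iface.think_time_s
    # Interleave across interfaces in the order they would hit the wire.
    return sorted(actions)

# A hypothetical user: browsing in one window while uploading in another.
plan = schedule([
    Interface("web page download", think_time_s=5.0, iterations=3),
    Interface("ftp file upload",   think_time_s=8.0, iterations=2),
])
```

The resulting plan is what distinguishes user-centric load from a raw packet blast: the traffic carries the timing and concurrency structure of actual user behavior.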
Overlaying the above attributes on network packets, in line with anticipated production behavior, allows for an accurate evaluation of the device. A proactive understanding of the device allows network equipment manufacturers to showcase device functionality to carriers and service providers that is directly in line with their stringent customer requirements.
Furthermore, it allows carriers and service providers to provision services to their customers (end-users) with high deployment confidence and reduced customer support costs.
At the end of the day, quality is what our customers demand, and they base quality on their customers’ perceptions and buying trends.
Can or will existing device-centric testing products support and pursue the user-centric testing paradigm? Absolutely. Will it be marketing fluff? Sometimes.
Reviewing testing products in terms of the device- and user-centric criteria, however, should make it quite simple to decipher what type of product you’re evaluating.
Are the tests defined in terms of expected traffic conditions or in terms of theoretical extremes?
The telecommunications industry recognizes that device-centric testing is mandatory and useful. Whether or not you subscribe to Moore’s law, Metcalfe’s law, or Traver’s law, the need to identify the extremes of a system will always exist.
As the capacity of heterogeneous networks continues to increase, we need to ensure that our developed products and services function properly under network traffic and load.
User-centric testing is not and should not be a replacement for device-centric testing. It is, however, a complementary process that enables us to more accurately assess the effective capacity and operational quality that our devices will support.
Sterling Pratz is the vice president of marketing and corporate development at interNetwork, Inc. based in San Francisco, Calif. He can be reached via e-mail at firstname.lastname@example.org or by phone at (415) 354-4900. The article was based on a white paper by Scott Cullinane of interNetwork, with contributions from Summer Palma, a senior editor with the company. Further information on user-centric testing is available at www.inw.com.