
Cover Story: how much is enough?

As demanding network applications dramatically increase your bandwidth needs, how do you know when you have enough, or too much? By analyzing current and future network needs and keeping tabs on the current load on your networks, you can make sensible decisions about the infrastructure you should have in place.


November 1, 2001  



When the first Ethernet networks appeared, nearly 20 years ago, their 10-megabit-per-second (Mbps) bandwidth seemed like more than enough. After all, Ethernet’s most popular predecessor, Arcnet, had offered just 2.5 Mbps. But that was then.

This is now, and the pressures of ever more ambitious network applications have driven that 10-Mbps technology to 100 Mbps, then to one gigabit per second (Gbps), and, though the standard is still in development, to 10 Gbps.

Network cabling has kept pace, evolving to the current Category 5 Enhanced (5e) standard, which can handle gigabit transmission; optical fiber, meanwhile, supports the fastest networking technologies yet developed, and then some.

All this is fine, but even though the bandwidth available today seems cheap when you consider what such capacity would have cost 20 years ago (if you could have had it at any price), it is still not free. And while bandwidth needs have grown dramatically over the past two decades, they may not continue to rise at the same rate.

“It’s fair to say the bandwidth requirements are continuing to increase,” says Bart Leung, senior associate at Ehvert Technology Services in Toronto, “but the actual rate of increase is perhaps slowing down.”

Combine those facts with an economic downturn, and simply piling on all the capacity you can get does not look like the wisest possible move. So how do you know what you really need?

The answer lies in capacity planning, a discipline not as widely or as carefully practised as it should be. By analysing current and future network needs and keeping tabs on the current load on your networks, you can at least make reasonable decisions about the infrastructure you should have in place.

WHAT IS TYPICAL?

The typical Ethernet configuration today — and Ethernet is the standard in more than 90 per cent of organizations — is 100-megabit service to the desktop and Gigabit Ethernet in the backbone, with an eye to eventually upgrading desktop connections to gigabit capacity, Leung says.

For most organizations, in-building wiring is not the bottleneck. Unless your cabling is quite old or your networking requirements are unusually demanding, the cabling you have is probably adequate — at least for your current needs.

“If you have Category 5 today, there’s no need to rip it out,” says Stephen Chow, senior manager at KPMG Consulting in Toronto. Category 5 cabling comfortably handles 100-Mbps service to the desktop, which is as much as most people need for the time being. Chow is referring to wiring to the desktop; network backbones between floors and buildings are a different matter. Ideally those should be fiber, but organizations with Category 5e or even Category 5 copper may be all right for a while, depending on their needs.

There is a difference between what you can live with and what you should install if you are fitting out new space, though. While Chow considers Category 5 cabling adequate for the near term, he advises installing the highest current standard when running new cable. Today that would mean Category 5e, which is the recommended cabling for Gigabit Ethernet.

The next standard, Category 6, is not yet final; approval will most likely come in early 2002. As soon as Category 6 is official, Chow says, those planning new network installations should adopt it.

In fact, it could make sense to go with the new standard before it is quite final, provided you stick with major vendors. “The major vendors are contributing to the creation of the standard, and as such will be engineering their products to fall within or exceed the specifications,” Chow explains. “The smaller vendors will not have this insight, nor will they have the research and engineering dollars that are needed to influence the development of the standard.”

An even more advanced standard, Category 7, has attracted little attention in North America so far, and network users here can probably safely ignore it for the time being.

As for the network backbone, Chow recommends single-mode or multimode fiber between floors of a building or between buildings in a campus. But forget fiber to the desktop. “Fiber to the desktop is to my mind quite wasteful,” he says. Very few individual users need the speed of fiber; the bottleneck is usually the PC itself rather than the physical network.

THE BIG SWITCH

Category 5 or better cabling can provide 100-Mbps speeds to the desktop, but its ability to do so depends on the network hardware at either end of the wire. The network interface card (NIC) in the desktop PC and the switch or hub at the other end determine whether the cable is used to its full potential.

Chow says a 100-Mbps switch will meet most users’ needs today. If your installation still uses 10-Mbps switches or hubs, or even 100-Mbps hubs, it is probably time to upgrade. In a new installation, Chow advises, 100-Mbps switching is good enough for now, but at least consider your upgrade path to Gigabit Ethernet. If the hardware is modular enough that Gigabit Ethernet ports can be swapped in later, that is sufficient “future proofing” for most of us.

A few years ago, bandwidth to the desktop was generally shared: several clients relied on the same 10- or 100-Mbps link to a central hub. As bandwidth requirements grew, switching gained popularity. A switch dedicates a connection to each desktop, so the client has the full 10 or 100 megabits of bandwidth to itself.
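
The arithmetic behind that shift is straightforward. Here is a back-of-the-envelope sketch (the link speed and client count are hypothetical) of what each client can count on under the two approaches:

```python
# Back-of-the-envelope comparison of shared (hub) versus switched
# bandwidth. Figures are illustrative, not measurements.

LINK_MBPS = 100   # nominal link speed
CLIENTS = 24      # clients attached to one hub or switch

# On a hub, every client contends for one shared collision domain,
# so the effective per-client share is at best the link speed divided
# by the number of active clients (collisions make it worse in practice).
shared_per_client = LINK_MBPS / CLIENTS

# On a switch, each client has a dedicated link to its own port.
switched_per_client = LINK_MBPS

print(f"Hub, {CLIENTS} active clients: ~{shared_per_client:.1f} Mbps each")
print(f"Switch: {switched_per_client} Mbps each")
```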

Heavily used network resources, such as application servers, need more capacity. For instance, you do not want to put an application-hosting server on a 10-Mbps port, comments Leung. Fine-tuning the equipment on the network can also make a difference, he adds.

Additionally, the network’s layout can make a big difference to its performance. Segmenting a network keeps local traffic within a segment, so it does not add to the load on the rest of the network. Often, Leung says, a capacity crunch can be solved simply by changing the way a network is segmented. Traffic studies can help network managers understand where the bottlenecks are and rearrange segments to get better performance.
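
The payoff from segmentation is easy to estimate. A rough sketch, with purely hypothetical traffic figures, of how much load the backbone is spared when most traffic stays local:

```python
# Rough illustration of why segmentation helps: traffic between hosts
# in the same segment never touches the backbone. All numbers are
# hypothetical.

total_traffic_mbps = 300   # aggregate traffic generated by all clients
local_fraction = 0.7       # share of traffic between hosts in the same segment

backbone_flat = total_traffic_mbps                              # flat network: everything crosses the core
backbone_segmented = total_traffic_mbps * (1 - local_fraction)  # only inter-segment traffic remains

print(f"Flat network backbone load: {backbone_flat} Mbps")
print(f"Segmented ({local_fraction:.0%} local) backbone load: {backbone_segmented:.0f} Mbps")
```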

On the desktop, a 100-Mbps NIC is the wise choice for new installations. There is little need to think far ahead here, since desktop PCs tend to be replaced every three or four years anyway. And some users may be quite happy with older 10-Mbps network cards. It depends, after all, on what demands the user puts on the network.

MAKING PREDICTIONS

Of course those demands depend mostly on the applications that are running. The reason networks have so much more capacity today than they did in the mid-1980s is that we do so much more with them. And forecasting what will be needed over the next few years depends largely on predicting what you will be doing with your network.

In the early 1990s client/server computing boosted demands on the network by increasing traffic between the client, which ran the user interface, and the server, which did the bulk of the heavy processing. Then in the mid-1990s came the Internet. E-mail enjoyed a sudden rise in popularity, and people began surfing the World Wide Web. Instead of most network traffic being local — co-workers at adjacent desks exchanging files — more traffic began going to and from the Internet. That meant a bigger load on network backbones and on links to the long-haul network.

What’s next? For harried network managers, the oncoming freight train is the multimedia potential of Internet Protocol (IP) networks.

Voice over IP is widely seen as a way to make networks more efficient by combining voice and data on a single network, but that network will need more capacity than the data network alone did. “Voice over IP is the biggest area right now where the corporate networks now need to grow to accommodate this new volume of traffic,” says Dominique D’Opera, an executive consultant with Montreal-based information technology consulting firm CGI Group Inc.

The demands should be manageable. Francis Sung, president and owner of Tone Performance Communications Inc., a Toronto-based networking contractor, says Category 5e cabling should be sufficient to carry both voice and data to the desktop.
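
Some rough arithmetic supports that view. The sketch below estimates the on-the-wire cost of a single voice call, assuming the G.711 codec and a common 20-millisecond packetization interval (actual overhead varies with codec and framing):

```python
# Approximate Ethernet bandwidth for one G.711 voice-over-IP call,
# assuming 20 ms of audio per packet (a common default).

CODEC_KBPS = 64            # G.711 payload rate
PACKET_INTERVAL_S = 0.020  # 20 ms of audio per packet

payload_bytes = CODEC_KBPS * 1000 / 8 * PACKET_INTERVAL_S   # 160 bytes
overhead_bytes = 12 + 8 + 20 + 18   # RTP + UDP + IP + Ethernet framing
packets_per_second = 1 / PACKET_INTERVAL_S                  # 50 pps

kbps_per_call = (payload_bytes + overhead_bytes) * 8 * packets_per_second / 1000
print(f"One G.711 call: ~{kbps_per_call:.1f} kbps on the wire")          # ~87 kbps
print(f"Calls per 10 Mbps of headroom: ~{int(10_000 / kbps_per_call)}")  # ~114
```

At well under 100 kbps per call, a handful of simultaneous conversations barely dents a switched 100-Mbps desktop link; the pressure lands on backbones and wide-area links, where calls aggregate.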

QoS PROVISIONS

When you mix voice with data, though, bandwidth is not the whole story. To avoid the packet delays, or latency, that cause voice signals to break up, a network needs either a lot of spare bandwidth or quality of service (QoS) provisions that get voice packets through quickly by giving them priority over data. This requires hardware and software that can distinguish among the different types of traffic and give priority to what absolutely must get there now, D’Opera explains.
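
One concrete mechanism, among several, is marking voice packets with a Differentiated Services code point (DSCP) so that QoS-aware switches and routers can queue them ahead of ordinary data. A minimal sketch, assuming a platform that exposes the IP_TOS socket option and using a placeholder destination address:

```python
# Minimal sketch: mark a UDP socket's traffic as Expedited Forwarding
# (DSCP 46), the class conventionally used for voice, so QoS-aware
# equipment can prioritize it. Setting the bits is the easy part;
# whether they are honored depends on the network's hardware and policy.

import socket

EF_TOS = 46 << 2   # DSCP occupies the upper six bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every packet sent from this socket now carries the EF code point.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))  # placeholder address/port
```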

If convergence is carried even further, to video traffic, bandwidth requirements rise rapidly and QoS becomes even more important. That may well happen, because cost-conscious businesses see IP videoconferencing as a way to bring down travel costs, says Chow. “We have to get serious about how we engineer our networks and we have to start incorporating quality of service,” Chow says.

Network-attached storage (NAS) — in which large storage devices are connected directly to the network rather than hanging off servers — may also have an effect on network needs. NAS installations tend to be used where there are very large amounts of data to handle, says Leung. These installations require robust, resilient infrastructures to work properly. At the least, NAS needs plenty of bandwidth connecting it to the network.
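
A quick calculation shows why. Moving a hypothetical 50-gigabyte dataset over links of different speeds, ignoring protocol overhead (which only makes matters worse):

```python
# Rough transfer times for a large dataset at different link speeds.
# The dataset size is hypothetical; protocol overhead is ignored.

DATASET_GB = 50

for link_mbps in (100, 1000):
    seconds = DATASET_GB * 8_000 / link_mbps   # gigabytes -> megabits, divided by rate
    print(f"{DATASET_GB} GB at {link_mbps} Mbps: ~{seconds / 60:.0f} minutes")
```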

KEEPING UP

The key to keeping up with bandwidth requirements is to monitor network activity and anticipate future needs. Large organizations do this fairly well, says D’Opera, but many smaller ones do not.

Chow says too many networking people do not really understand performance analysis. That means they fail to make good use of the tools available to them and they wind up putting out fires rather than stopping them before they begin.

“When they first design a network they design it for an initial need,” says Domenico Didiano, an executive consultant at CGI, “and they forget about it until it breaks.”

Chow says many organizations have tools that let them monitor exactly what is happening on their networks, but use them only when problems appear. Instead, they should monitor traffic all the time and use the data to decide what upgrades and tweaks the network needs to keep it running smoothly.

If the network is running at 50 per cent of capacity, Chow says, “you probably don’t need to look at any upgrades in the immediate future — maybe just some tuning.” At 60 per cent, you should think about what upgrades will be needed to keep up with future needs. At 75 per cent, you probably should contemplate a network redesign. Everything may be working fine most of the time, but a sudden spike caused by a big application such as video streaming “could kill the network,” Chow warns.
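
Chow’s rules of thumb are simple enough to codify. A sketch of how the thresholds might be applied to regularly collected utilization figures (the collection side, whether SNMP counters or RMON probes, is left out and depends on your tooling):

```python
# Chow's rule-of-thumb capacity thresholds, expressed as a check that
# could run against routinely gathered utilization numbers.

def capacity_advice(utilization_pct: float) -> str:
    if utilization_pct >= 75:
        return "Contemplate a network redesign."
    if utilization_pct >= 60:
        return "Plan the upgrades needed to keep up with future needs."
    return "No upgrades needed in the immediate future; maybe some tuning."

for load in (45, 62, 80):
    print(f"{load}% utilization: {capacity_advice(load)}")
```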

The City of Brampton, Ontario, steadily monitors the use of all of its information technology resources, including desktop computers, servers and the network, says Chris Moore, the city’s chief information officer. “Our goal is to be proactive and stay ahead of the wave,” he says.

But Moore says capacity planning takes more than just watching what is happening on the network. The city now has staff dedicated full-time to planning IT infrastructure, and while understanding the technology is part of the job, so is understanding business needs, Moore says. “They have to actually pursue business partners within the organization,” he explains — to find out what the city’s future needs are and what new applications will result that will affect infrastructure requirements.

That is the ideal: anticipate new requirements and provide for them. “Proactive improvement is better than reactive improvement,” Didiano says.

With solid data on network usage and a sense of where the organization is going, you at least have a sound basis for planning the network’s evolution. Nothing, though, will eliminate the choice that has to be made between the network configuration that would be ideal for current and future needs and the inevitable budget constraints.

“It’s still a balancing act,” Leung says. Technology may advance, but some things never change.

Grant Buckler has written about information technology and telecommunications since 1980. He is now a freelance writer and editor living in Kingston, ON.

