
Corporate Guessing Games

There is a wealth of options available to network managers when it comes to developing a roadmap for the future. The biggest challenge will be figuring out which path to take.


November 1, 2010  



It appears there are a lot of things keeping corporate network managers up at night these days. Virtualization, immersive video, security and surveillance, and cloud computing are all driving up demand for bandwidth, and bringing a whole new set of headaches for anyone trying to keep pace.

Network demands differ across the broad spectrum of corporations. Some IT groups are fully embroiled in supporting day-to-day videoconferencing and in-house data centre management; others in managing the ins and outs of outsourcing and mobile workforces. Some are focusing their strategies on single backbone infrastructures, while others are starting to see segmentation as a means to manage the multi-faceted traffic streams running over their networks. Those with deep pockets are embarking on network overhauls, while the have-nots are focusing on offloading applications and optimizing what they have.

Following is what some industry experts have to say about today’s corporate networking questions.

The big picture: “We’re seeing a lot of CEOs and CIOs trying to figure out how to break down departmental silos and provide tools to allow internal and external collaboration,” says Jeff Seifert, chief technology officer for Cisco Systems Canada Co. in Toronto. “We’re also seeing employees getting applications from the consumer world and workforces expecting the latest and greatest in mobile technologies. Organizations are having a hard time deciding what to allow users to bring to the network and how to do it in a safe and secure way.”

Immersive telepresence is quickly gaining ground as a means for organizations to improve collaboration and reduce costs. It’s also pushing bandwidth requirements beyond the norm.

As organizations move to offshoring applications and enabling remote workers, the virtual desktop concept has become increasingly important, Seifert adds.

Social responsibility and carbon-neutral programs continue to be big factors, especially in the data centre environment. And everything over IP has become a commonplace theme as organizations put more and more devices onto networks, including physical security, surveillance and building systems.

Mobile is also breaking down the networking borders, putting an increasing demand on network managers to secure all additional systems coming onto networks, from telephones to tablets. 3G and 4G are increasingly being considered as wireless backup options to primary landlines.

In addition, wireless is playing a bigger role in driving machine-to-machine communications. “We have already connected people so they can work more efficiently; the next phase is connecting devices,” says Dragan Nerandzic, chief technology officer for Ericsson Canada Inc. in Toronto.

“In the next 10 years there will be about 50 billion connections around the world between devices that have the capabilities to process information.” On the corporate front that will entail the management of energy-consuming devices, building controls, and transportation and material handling functions. It will also require the right wireless broadband technology to support it.

“The next generation emerging in wireless is Long Term Evolution (LTE),” Nerandzic says. “This will expand the role of wireless to controlling devices related to business and allow for their efficient use.”

Future-proofing is becoming a guessing game at best. Corey Copping, marketing manager for HP Networking in Toronto, says network managers are increasingly pressured to make the right decisions. “We are finding that CIOs and network managers are having a hard enough time keeping things up and running, let alone planning for future projects. They are constantly worrying if their networks are scalable and able to handle all the trends that are driving bandwidth demand. There’s just too much coming at them.”

Peter Sharp, senior communications consultant with IBI Group in Toronto, agrees that most network planners are overwhelmed simply by managing change today. “There are enough choices in the IT area today to make true strategic planning extremely difficult. The top challenge is simply choosing a product. Many are driven by the ‘last man in’ effect; in other words, basing their decisions on what the last vendor tells them will save money, whether that’s servers, hosting or building systems.”

Cloud and virtualization: IT managers are also struggling with what to outsource to the cloud and what to keep closer to home. These are decisions that have a significant impact on network design and security, Seifert says. “Those looking to a hybrid model have to consider where they are hosting their applications due to privacy concerns. At the same time, they want to find ways to turn services up more quickly through virtualization or offshoring.”

“Whether data resides in the cloud or your own data centre, the challenge lies in understanding how to map your network securely and separate departmental applications – for example, human resources versus building management systems.”

Network planning and design in corporate facilities is being driven by the need for high performance computing, while reducing costs, energy consumption and the impact on the environment, notes Chin Choi-Feng, director of Data Centre PLM for CommScope Inc. in Richardson, Tex.: “Virtualization and cloud computing are having a big impact on cabling infrastructures in terms of bandwidth, performance, robustness and reliability.”

Given that virtualization and cloud computing are approaches that entail the sharing of resources, visibility into the network is increasingly critical, Choi-Feng adds. “Intelligent infrastructure management through tools like iPatch can be of great value.”

She adds that the ratification of the OM4 fiber standard, which allows for longer distances, promises to play a key role in meeting those requirements. “While equipment is refreshed every three to five years, cabling will likely last longer. We have 20-year warranties for a reason. The best thing a manager can do is put forth the value proposition for 10GBase-T and OM4 cabling so they can be ready to support more bandwidth when the time comes.”

Richard Mei, director of research and development for CommScope, notes that the new IEEE 802.3ba standard for 40 and 100 Gigabit Ethernet is unique in that it uses parallel optics. The iPatch offering is designed to manage polarity when multiple fibers are used for a single channel. “Basically it makes sure the network manager doesn’t have to worry about any issues around the physical layer.”
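Parallel optics simply means that a single 40G or 100G channel is striped across several 10 Gb/s lanes, each on its own transmit and receive fiber, which is exactly why polarity management matters. The short Python sketch below walks through that fiber arithmetic using the lane counts defined in 802.3ba; the 12-channel example is invented for illustration.

```python
# Rough illustration of the cabling arithmetic behind 802.3ba parallel optics:
# each 40G/100G channel is built from several 10 Gb/s lanes, and every lane
# needs its own transmit and receive fiber.

LANES_PER_CHANNEL = {
    "40GBASE-SR4": 4,     # 4 x 10 Gb/s lanes
    "100GBASE-SR10": 10,  # 10 x 10 Gb/s lanes
}

def fibers_required(interface: str, channels: int) -> int:
    """Total multimode fibers needed: one transmit and one receive fiber per lane."""
    return channels * LANES_PER_CHANNEL[interface] * 2

if __name__ == "__main__":
    for interface in LANES_PER_CHANNEL:
        # Hypothetical row of 12 uplinks in a data centre cabinet.
        print(f"{interface}: {fibers_required(interface, 12)} fibers for 12 channels")
```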

Virtualization is also changing the nature of switching within the data centre environment. HP Networking’s FlexFabric architecture, for example, enables the removal of all the cabling normally used between switches and servers, bringing it back to one cable from the network to the server core, Copping explains. “Information can then be shared virtually over all switches. That translates into less cable, fewer rackmount switches, and the flexibility to add ports and modules as you need them.”

He claims that this approach can reduce the total cost of ownership for a switch by as much as 60% through reduced power consumption, cooling requirements, acquisition costs and maintenance, while doubling bandwidth and power.

To each his own: Convergence is now being superseded by segmentation as video applications continue to grow. Network segmentation is gaining ground as a viable means to manage bandwidth needs and separate mission critical applications more effectively.

Immersive telepresence, streaming (enterprise versions of YouTube) and IP-based surveillance each bring their own bandwidth needs, Seifert explains. “These are very different types of video applications. Immersive telepresence, for example, can push current networks a fair bit. We’re at a point where we need to differentiate the types of video through segmentation, along with policies around who has access to what components.”

As Mei explains, “If Ethernet traffic and content is invisible to the network switch, it will not distinguish what is in the packet. As long as it reaches the destination with reasonable latency, it will not see anything different. Now you need to think in terms of different content (e.g. voice, video or video on demand) that rides on top of certain protocols.”
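In practice, telling those streams apart usually comes down to marking traffic and classifying it into service classes, for instance by DSCP value. Here is a minimal Python sketch of the idea; the class names and the DSCP-to-class mapping are illustrative assumptions, not a configuration from any vendor quoted in this article.

```python
# Illustrative traffic classification: map DSCP markings to service classes so
# telepresence, streaming video and surveillance can be treated differently
# instead of riding one undifferentiated best-effort pipe.

# Hypothetical mapping; real deployments follow their own QoS policy.
DSCP_CLASSES = {
    46: "voice",              # EF
    34: "interactive-video",  # AF41 - telepresence
    26: "streaming-video",    # AF31 - enterprise video portal
    18: "surveillance",       # AF21 - IP camera feeds
    0:  "best-effort",
}

def classify(dscp: int) -> str:
    """Return the service class for a packet's DSCP value."""
    return DSCP_CLASSES.get(dscp, "best-effort")

def tally(dscp_values):
    """Count packets per service class, e.g. to size per-class queues."""
    counts = {}
    for dscp in dscp_values:
        cls = classify(dscp)
        counts[cls] = counts.get(cls, 0) + 1
    return counts

if __name__ == "__main__":
    sample = [46, 34, 34, 18, 18, 18, 0, 26]
    print(tally(sample))
```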

“You have to be very careful in how you design your network infrastructure,” Sharp advises. “There can be conflicting demands on volume and speed. Surveillance, for one, is particularly bandwidth-hungry. It’s a huge problem trying to provide bit streams with entirely different performance expectations on a common platform. If you separate the traffic streams into what the enterprise operates and the facility operates, it’s easier to manage.”

He contends that convergence is a “false white knight. I’m all for the idea of using convergent-friendly technology, for example cameras, voice and security on IP. But it doesn’t mean it all has to run on the same IP backbone. Just because it’s a similar technology doesn’t mean you have to use the same highway. You want your building management and security systems to operate independently of your enterprise system. If you take down your enterprise system for an upgrade, that is not the time to have your security and building management turned off. You want your facility to continue to operate even if the enterprise system is down.”

And finally … If that wasn’t enough, there is one other challenge corporate network managers can expect to face in the near future: IPv6 migration. “While this has been on the horizon for some time, IP addresses will soon be in very short supply,” Seifert explains. “So far, people have been able to do workarounds, but time is running out. IPv6 will solve some of today’s network challenges since it will allow all devices to have native IP addresses versus using translation boxes.”
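The arithmetic behind that shortage is easy to demonstrate with Python’s standard ipaddress module; the snippet below is a simple illustration, not part of any planning tool mentioned here.

```python
# Quick look at the address arithmetic behind the IPv6 push: IPv4's 32-bit
# space is nearly exhausted, while IPv6's 128-bit space leaves room for every
# device to carry a native, globally unique address.
import ipaddress

print(f"IPv4 addresses: {2 ** 32:,}")     # about 4.3 billion in total
print(f"IPv6 addresses: {2 ** 128:.2e}")  # about 3.4 x 10^38

# A single standard IPv6 subnet (/64) already dwarfs the entire IPv4 Internet.
subnet = ipaddress.ip_network("2001:db8::/64")  # documentation prefix as a stand-in
print(f"Hosts in one /64: {subnet.num_addresses:,}")

# The typical IPv4 workaround today: private addressing plus translation (NAT).
print(ipaddress.ip_address("192.168.1.20").is_private)  # True - needs a translation box to reach out
```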

Network managers definitely have their plates full when it comes to planning their network strategies in the months to come. The good news is there is a wealth of options available to them when developing a roadmap for the future. The biggest challenge will be figuring out which path to take. CNS

Denise Deveau is a Toronto-based freelance writer. She can be reached via e-mail at denise@denised.com.


Cleaning up your fiber act

While virtualization is becoming a top priority on many CIO agendas, Ron Groulx, product manager for Fluke Networks Canada Inc. in Mississauga, says there are still a number of challenges when it comes to keeping data centre cabling running clean.

“When data centres were more distributed environments, problems were associated with breaks, bends, connection and termination issues. Recent studies have shown that most problems are actually caused by dirt and contaminants in the fiber.”

This is especially problematic as we see an increasing number of fiber strands running in single cables. “One cable the size of a Cat 6A can run 48 strands of fiber, which is why we are seeing more of it in data centres,” Groulx explains. “The assumption when handling fiber has been that it was tolerant enough to handle some dirt. While that was true in the past, that’s no longer the case. What you need are the necessary inspection tools.”

Tier 2 certification testing requires an OTDR (optical time-domain reflectometer) to gain a view of the fiber and connection points. “It really helps the technicians understand the fiber route, where the connection points are, light loss points, and the problem parts,” Groulx says.

It is important to note that Tier 1 testing still holds value, he adds. “It is still the most accurate approach for measuring fiber. Do not assume that if you just do Tier 2 testing you are good. A fiber inspector or loss meter should be used first; then the OTDR to gain more clarity. As more fiber is being run in data centres, technicians really need to understand it a bit more.”
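Tier 1 testing is essentially a loss-budget check: measure end-to-end insertion loss with a light source and power meter, then compare it against what the link should lose given its length, connectors and splices. The Python sketch below shows that arithmetic using typical TIA-568 allowances for multimode fiber at 850 nm; the link in the example is hypothetical.

```python
# Tier 1 loss-budget check: compare measured insertion loss against the loss
# allowed for the link, built up from fiber length, connectors and splices.

# Typical TIA-568 allowances for multimode fiber at 850 nm.
FIBER_LOSS_DB_PER_KM = 3.5   # dB/km
CONNECTOR_LOSS_DB = 0.75     # dB per mated connector pair (maximum)
SPLICE_LOSS_DB = 0.3         # dB per splice

def loss_budget(length_m: float, connectors: int, splices: int) -> float:
    """Maximum acceptable insertion loss for the link, in dB."""
    return (length_m / 1000.0) * FIBER_LOSS_DB_PER_KM \
        + connectors * CONNECTOR_LOSS_DB \
        + splices * SPLICE_LOSS_DB

if __name__ == "__main__":
    # Hypothetical data-centre link: 90 m of multimode, two mated pairs, no splices.
    budget = loss_budget(length_m=90, connectors=2, splices=0)
    measured = 1.2  # dB reported by the power meter
    print(f"budget {budget:.2f} dB, measured {measured:.2f} dB")
    print("PASS" if measured <= budget else "FAIL - inspect, clean and retest with the OTDR")
```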


VITP systems speaking the same language

The Vancouver Island Technology Park (VITP) opened in late 2002. Previously a hospital facility, the 61,000-square-metre site was subsequently purchased by the University of Victoria in 2005 and now houses 35 tenants.

Since its inception as a corporate facility, VITP has focused on installing advanced technology to attract tenants, boost revenue opportunities, and reduce the park’s environmental footprint. Dale Gann, president of technology parks for the University of Victoria, says that creating a fast, stable network core that can support a broad range of IP-based applications has been a key strategic component.

“The question we always ask ourselves is how can we use technology innovation to run the building in a cost-effective way? In other words, how can we best manage our ‘fourth utility’ (i.e. the network)?” he says.

VITP was able to use the network pieces already in place and “tie the pieces we needed to provide advanced services to tenants of any size, from small startups to large companies and to make it all scalable,” Gann explains.

Among other products, VITP is using Cisco Network Building Mediator to support an application by Pulse Energy. The Pulse Energy dashboard displays real-time energy consumption information within the park, which can be used to identify possible areas of improvement and help control the park’s environmental footprint.

Beyond energy consumption, the Cisco gear also provides the visibility needed to manage all other IP-based building systems, from heating and cooling to integrated phones and security, Gann says. “All building systems with IP addresses can be put into a common language and rules applied across the entire platform.”

He adds that this ‘universal language’ approach is highly effective in a building with legacy systems because it is easy to add on extras such as car access control, motion sensors, cameras, etc. and have everything come back as ‘clean’ language to the Mediator box for tying together.
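The pattern behind that ‘universal language’ is easy to picture: readings from disparate building systems are translated into one common schema, and rules are then written once against that schema. The Python sketch below is an invented illustration of the idea; the device adapters, field names and after-hours rule are assumptions, not the Mediator’s actual data model.

```python
# Illustrative pattern: normalize readings from heterogeneous building systems
# into one common schema so a single set of rules can apply across all of them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    system: str   # "hvac", "power", "access", ...
    point: str    # e.g. "zone3.supply_temp"
    value: float
    unit: str

# Hypothetical adapters: each translates a vendor-specific payload into a Reading.
def from_hvac_controller(raw: dict) -> Reading:
    return Reading("hvac", raw["pt"], float(raw["val"]), raw.get("u", "C"))

def from_power_meter(raw: dict) -> Reading:
    return Reading("power", raw["channel"], float(raw["kw"]), "kW")

# One rule, written once against the common schema.
def after_hours_rule(reading: Reading, occupied: bool) -> Optional[str]:
    if not occupied and reading.system == "power" and reading.value > 5.0:
        return f"Alert: {reading.point} drawing {reading.value} kW after hours"
    return None

if __name__ == "__main__":
    readings = [
        from_hvac_controller({"pt": "zone3.supply_temp", "val": "13.5"}),
        from_power_meter({"channel": "lab.riser2", "kw": 7.2}),
    ]
    for r in readings:
        alert = after_hours_rule(r, occupied=False)
        if alert:
            print(alert)
```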

In moving all systems to an IP-based model, Gann says it has become necessary to segment security and telepresence traffic from voice and data. “You don’t want latency in your security feeds or telepresence components,” he explains. “Security especially is a big draw on a network.”

As part of its ongoing upgrades, VITP continues to replace any analogue products that reach the end of their lifecycle with an IP-based version. “That goes for just about anything,” Gann says. “We need to make sure that whatever we add to the network is readable and can be viewed, managed and communicate with the Mediator.”

“Before we had all kinds of systems going to all kinds of different boxes, and we couldn’t apply any rules or procedures,” he says. “Now that’s no longer a problem because all our systems speak the same language.”