Consolidation and virtualization are the new realities, and so are the network smarts needed to run them.
March 1, 2008
Industry observers of all stripes, from equipment vendors to network managers, agree that there is going to be a major evolution in the data centre landscape in the months to come. As legacy environments have rapidly become too big, complex and costly to run, companies are opting to build sleeker, more efficient data centres at an unprecedented rate. It is all in an effort to reduce overhead and leverage the newest enabling technologies.
“What is really interesting is that we have never seen so many new data centres being built,” says Jason Reil, a data centre product service specialist with Cisco Systems Canada Co. in Toronto. “Businesses are finding it easier to ‘dump’ what they have and build it the right way. Most customers are either in the process of doing that or thinking about it.”
Consolidation and virtualization are at the top of every planner’s list of “must haves.” Modularity is integrated into everything from servers to power outlets, allowing for easier, faster migrations and upgrades, and even wholesale moves if the need arises. The newest data centres are becoming a showcase of the “less is more” principle, as more and more processing power gets squeezed into an increasingly smaller footprint.
The growing reliance on IP for everything from voice to video is pushing existing network capacity to the limit. Today, the focus is on adapting to the 10 Gigabit Ethernet migration within the data centre environment. Tomorrow 100 Gigabit Ethernet will be integral to the network design process.
All eyes are also on energy consumption and greening of the data centre environment. Energy saving features are part and parcel of the latest hardware and devices, from servers and power outlets to heating and cooling systems.
Utility cost savings
Layouts and enclosure designs are being optimized for more energy-efficient cooling and heating of facilities. All of these measures are delivering dramatic reductions in utility costs to the tune of millions of dollars in annual savings.
At the heart of making all this work will be the built-in intelligence to manage the new networking complexities of the next-generation data centre. “All of this will make the management of networks substantially more difficult,” says Luc Adriaenssens, senior vice president of R&D and technology, CommScope Inc. in Richardson, Tex. “Moving to intelligent network management is the only way to manage the new infrastructures down to the cabling level.”
Network planners, installers and managers are already facing a rapidly escalating demand for cross-disciplinary skills for monitoring and managing this new “virtual reality.” While the complexities of the new data centre environment grow, however, there is considerable comfort to be found in the fact that the career prospects are strong.
The data centre is one area that observers say is exempt from even the most austere budget cutting measures.
“The recession is having no impact on the data centre at this point,” says Adriaenssens. “If you look at all the growth projections around bandwidth compared to current data centre capabilities, retrofitting, adding or building a new centre is a priority for everyone. When a company is looking at cutting back and reducing growth in certain areas, the data centre is off limits.”
Consolidating and virtualizing for efficiency: The aggressive move to data centre replacement versus upgrades may seem a drastic measure.
In many cases, however, current utilization of computing and facility resources is very poor, and space along with the infrastructure to support legacy data centres is getting more costly to maintain with each passing month. Consolidation of resources into a smaller space can deliver significant top line hardware and energy savings right out of the gate.
An exemplar of the rationale behind consolidation, Sun Microsystems last year unveiled three of its “next-generation” data centres in the U.S., the U.K. and India. Within the first three months of hardware consolidation efforts in its Santa Clara, Calif. operations, Sun was able to reduce the number of servers from 2,177 to 1,240; storage devices from 738 to 225; racks from 550 to 65 (an 88% compression of square footage); and reduce power capacity demand from 2.2MW to 500kW ($1.1 million in annual cost savings).
At the same time computing power increased by 456% and the company was able to avoid $9.3 million in proposed Phase Two construction costs.
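Those percentages hold up to simple arithmetic. The short sketch below, using only the numbers quoted in the article, recomputes the reduction in each category:

```python
# Sun's consolidation figures as quoted in the article (before vs. after).
before = {"servers": 2177, "storage": 738, "racks": 550, "power_kw": 2200}
after = {"servers": 1240, "storage": 225, "racks": 65, "power_kw": 500}

for key in before:
    reduction = 100 * (1 - after[key] / before[key])
    print(f"{key}: {before[key]} -> {after[key]} ({reduction:.0f}% reduction)")
```

The rack count alone drops by roughly 88%, consistent with the square-footage compression Sun reported.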
Dean Nelson, director of global lab and data centre design service for Sun, explains that the firm was like many other organizations with large-scale legacy data centres. “Our 10-year-old infrastructure could not handle the high density (requirements of consolidation).”
A complementary driver behind the new data centre model is virtualization. In a recent discussion on the next-generation data centre, Zeus Kerravala, SVP of enterprise research at Yankee Group, outlined his vision of the data centre of the future, in which all resources will be transformed from a physical to a virtual infrastructure.
“Memory, storage, processor, database and other computing resources will be virtual resources that can be called ‘on demand’ by whatever application requires them. These virtual resources will be composed of pooled physical resources that can be across the data centre, across the city or across the globe, but will look like a single resource to all applications.”
He points out that virtualization is already starting to have an impact on storage and will soon make its way into the network. According to Kerravala, the network will become “the backplane of the virtual data centre.”
Yankee Group studies, among others, have shown that current utilization of storage and servers stands at 25% and 30% respectively. Network utilization is also in the 30% range. It is estimated that virtualization could improve server and storage utilization to well over 90%.
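The consolidation arithmetic behind those utilization figures can be sketched with a toy model (my own illustration, not from the Yankee Group research): if the same total workload is repacked onto servers running at 90% utilization instead of 30%, roughly a third as many machines are needed.

```python
import math

def servers_needed(total_load, capacity_per_server, target_utilization):
    """Minimum number of servers that can carry total_load while each
    runs at no more than target_utilization of its capacity."""
    return math.ceil(total_load / (capacity_per_server * target_utilization))

# 100 identical servers, each only 30% busy, carry a total load of 30 units.
total_load = 100 * 1.0 * 0.30

print(servers_needed(total_load, 1.0, 0.30))  # 100 servers at 30% utilization
print(servers_needed(total_load, 1.0, 0.90))  # 34 servers at 90% utilization
```

The model ignores headroom for failover and peak demand, which is why real-world targets stop at “well over 90%” rather than 100%.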
The idea of the virtualized network is still in its early stages, but according to Benoit Chevarie, vertical market manager for Belden in Montreal, we can expect to see movement in that direction very soon. “Cisco’s recent Nexus 7000 Series announcement is bringing virtualization to the switch level. That will have an impact on cabling since boxes will need to talk together in a virtualized environment. It will also push the demand for higher bandwidth communication links between machines to track information transfer from server to server.”
10 gig times 10: It is evident that a successful transition to consolidation and virtualization relies on having a networking infrastructure that can withstand the added capacity demands and deliver the reliability needed for it all to work. “Virtualization and the aggregation of IP technology demands more bandwidth,” says Reil. “That is because the whole model is changing to one that opens up the network and eliminates the need to manage different ones.”
A more open model in turn means greater integration skills requirements and the power to drive the traffic. Hence, the growing demand for 10 Gigabit Ethernet in the data centre and emerging interest in 100 gig as the next step forward.
“It is clear that 10 gig is the future,” says Adriaenssens. “10 gig to the server should be on everyone’s planning horizon, if it is not already. Within three years you will not see anything other than 10 gig within a reasonably sized data centre, so if you do not put the right cabling infrastructure in now, you will be looking at huge upgrade costs later.
“If you are not putting in 10 gig to the server, you are missing the boat, especially when you are migrating to virtualized services where you are aggregating multiple applications.

“If you think of the planning horizon, cabling is the longest since you design for 15 or more years out. The key is to look at what you need tomorrow. In fact, 100 gig needs to be part of that planning horizon, since it is in the process of being standardized by the IEEE and everyone is pushing hard to get it out there quickly.”
He adds that Cat 6A cabling, which carries 10 Gigabit Ethernet over 100 metres of copper, has already reached double-digit growth, and he expects those numbers to continue.
Chevarie points out that the recent TIA approval for 10 gig over copper will make 2008 a pivotal year for implementation.
“That makes 10 gig more affordable and removes some constraints associated with driving virtualization in the data centre. I see a lot of people deploying both copper and fibre for future-proofing.”
Cabling professionals will be wise to get up to speed on the applicable testing and installation requirements for the new medium since there are important distinctions in managing these high-speed networks. “It is a much more demanding medium and will require enhanced skill sets,” says David Green, director of Marketing for Fluke Networks Canada in Mississauga. “The new test parameters are more stringent and information on the new standards is limited. Cabling also has different properties that need to be taken into account during layout and installation. The fact is, some cabling installations today can already support 10 Gigabit Ethernet, so the first step is to confirm that before investing in large scale infrastructure replacement.”
Getting to know the ropes: With virtualization, bandwidth isn’t the only issue. One of the biggest challenges ahead is managing it all. While hardware requirements have shrunk, knowing what’s running where over the network will be a significant challenge.
“Virtualization is putting more pressure on the network,” says Reil. “We are already starting to see that. If you don’t know where your storage or servers are, that’s the first bottleneck.”
Kerravala notes that given the network is the pervasive element within the virtualized data centre and the key to allowing applications to connect to multiple pools of computing resources, “More will be expected out of the network. Maintenance windows will shrink or be nonexistent, so the network needs to be able to dynamically provision resources when and where they are needed. Lastly, the network will be the policy enforcement engine that will ensure that data centre resources are being used as efficiently as possible.”
“Intelligent infrastructure management will help operators run centres more efficiently and drive processes,” says Adriaenssens. “It is quicker and far more reliable to have a regimented process when things go wrong, and to have the tools to recover quickly.”
At the Toronto Hydro Telecom facility, for example, everything from door entry systems and cabling performance to network activity and HVAC systems is monitored 24/7 through a state-of-the-art network operating centre running HP OpenView for monitoring and ServiceDesk for tracking events.
“Everything is fully monitorable,” says David Dobbin, president, Toronto Hydro Telecom Inc. “Even the power bars in the cabinets. If there is a power surge, we immediately know about it and can swap out components without disruption.”
Chevarie agrees that with all the infrastructure changes that can happen in a virtualized environment, having the tools in place to document network activity is absolutely critical.
“You need a piece of software that keeps your log books up to date on all the physical changes so you can follow what happens between this and that port.”
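As an illustration of the kind of log book Chevarie describes, and only an illustration (real intelligent patching systems detect moves at the hardware level, and every name below is invented), a minimal record of physical port-to-port changes might look like this:

```python
from datetime import datetime, timezone

class PatchLog:
    """Toy in-memory log of physical patching changes, so that what
    happens 'between this and that port' can be traced over time."""

    def __init__(self):
        self.connections = {}  # port -> port it is currently patched to
        self.history = []      # (timestamp, action, port_a, port_b)

    def connect(self, port_a, port_b):
        self.connections[port_a] = port_b
        self.connections[port_b] = port_a
        self.history.append((datetime.now(timezone.utc), "connect", port_a, port_b))

    def disconnect(self, port_a):
        port_b = self.connections.pop(port_a, None)
        if port_b is not None:
            self.connections.pop(port_b, None)
            self.history.append((datetime.now(timezone.utc), "disconnect", port_a, port_b))

    def peer(self, port):
        """Return what a port is currently patched to, if anything."""
        return self.connections.get(port)

log = PatchLog()
log.connect("panel1:24", "switch3:07")
print(log.peer("panel1:24"))  # switch3:07
```

The point of the sketch is simply that every physical change is timestamped and queryable; commercial intelligent infrastructure management tools add automatic detection on top of this idea.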
Another helpful tool, says Chevarie, is pre-terminated jacks, which get things up and running as quickly and efficiently as possible.
This level of efficiency applies equally to the energy consumption part of the data centre equation. “There are a great deal of efficiencies that can be gained through virtualization,” says Chris Loeffler, product manager, Eaton Data Center Solutions in Raleigh, N.C. “But to do that you need to look at how to measure that efficiency. If you can’t measure what’s going on, you can’t improve it, even if you have invested millions in green technology.” He adds that information gathering is key. “You really need to have information coming from your power source, your UPS, the power strips in the racks themselves, and the heating and cooling systems. Even with all that raw data, you need to aggregate it to get actionable information. What happens if you consolidate and you have load running at 90%? That could introduce hot spots to worry about, or require a redesign to optimize heating and cooling functions.”
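Loeffler’s point about aggregating raw readings into actionable information can be sketched in a few lines. The device names, readings and the 90% threshold below are illustrative assumptions, not a real product API:

```python
# Hypothetical power readings (kW) from the sources Loeffler lists:
# the UPS, the power strips in the racks, and the cooling plant.
readings_kw = {
    "ups": 180.0,
    "rack_strip_a": 42.0,
    "rack_strip_b": 51.0,
    "crac_unit": 60.0,  # computer-room air conditioning
}

# Rated capacity of each monitored rack strip (kW), also assumed.
rack_capacity_kw = {"rack_strip_a": 45.0, "rack_strip_b": 60.0}

# Aggregate: total IT load across the racks.
it_load = readings_kw["rack_strip_a"] + readings_kw["rack_strip_b"]

# Flag any rack running above 90% of its rated capacity: after a
# consolidation, these are the candidate hot spots Loeffler warns about.
hot_spots = [rack for rack, cap in rack_capacity_kw.items()
             if readings_kw[rack] / cap > 0.90]

print(f"IT load: {it_load:.0f} kW")
print(f"Potential hot spots (>90% of rack capacity): {hot_spots}")
```

Here `rack_strip_a` runs at about 93% of capacity and gets flagged, while `rack_strip_b` at 85% does not; only by combining readings with capacities does the raw data become something an operator can act on.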
Doing more with less: Budget and environmentally conscious enterprises today are discovering that there is a great deal of additional productivity and performance to be gained through the efficient deployment of data centre resources.
With the drive to consolidation and virtualization, projects that once took years to complete can be deployed in a matter of months or sometimes weeks. Reduced hardware and energy requirements are delivering millions of dollars in savings.
At the same time, the spotlight will be on the networking infrastructure to deliver on the promise these enabling technologies bring.
Denise Deveau is a Toronto-based freelance writer. She can be reached at firstname.lastname@example.org.
Agility will be the primary measure of data centre excellence by 2012: Gartner
The next five years will see agility become the primary measure of data-centre excellence, the research firm Gartner has predicted.
Speaking recently in London, England, analysts advised that through 2012, virtualization will be the factor with the most significant impact on data centres.
It greatly reduces the number of servers along with space, power and cooling demands, and ultimately enables agility.
“An agile data centre will handle exceptions effectively, but learn from exceptions to improve standards and processes,” said Tom Bittman, Gartner vice-president. “Agility will become a major business differentiator in a connected world. Business agility requires agility in the data centre, which is difficult as many of the technologies for improving the intelligence and self-management of IT are very immature, but they will evolve over the next 10 years.”
Gartner defines agility as the ability of an organization to sense environmental change and respond efficiently and effectively.
However, no organization will be agile if its infrastructure is not designed for agility.
Bittman said: “Agility is the right strategic balance between speed and operational efficiency.”
As a core enabler of agility, virtualization is the abstraction of IT resources in a way that makes it possible to creatively build IT services. While the vast majority of large organizations have started to virtualize their servers, Gartner estimates that currently only 6% of the addressable market is penetrated by virtualization, a figure set to rise to 11% by 2009.
The number of virtualized machines deployed on servers is expected to grow from 1.2 million today to 4 million in 2009.
“Virtualization changes virtually everything,” said Bittman. He explained that it is not just about consolidation but also includes transitioning resource management from individual servers to pools, increasing server deployment speeds up to 30 times.
Gartner warned that tools alone are not a substitute for a good process and made the following recommendations to organizations planning or implementing virtualization:
• When looking at IT projects, balance the virtualized and unvirtualized services. Also look at the investments and trade-offs;
• Reuse virtualized services across the portfolio. Every new project does not warrant a new virtualization technology or approach;
• Understand the impact of virtualization on the project’s life cycle. In particular, look for licensing, support and testing constraints;
• Focus not just on virtualization platforms, but also on the management tools and the impact on operations.