
Change Is On The Way

Thankfully, the data centres of the future will be vastly different creatures than the power-guzzling, administrative headaches of today.


January 1, 2009  



A computer, like anything else, works best when it is built and used for a specific purpose. Though far more complex than a hammer or saw, a computer is a tool just the same. And all tools must be designed for their task.

These days it is easy to forget this simple fact. As computer systems grow increasingly complex, particularly in the data centre, we find that many are being used in unintended ways — for example, running software they were never meant to run, or being housed in buildings that were designed for other purposes.

As hardware and software additions to the data centre require more and more connections between new and old technology, the industry will eventually hit a complexity wall, beyond which data centres become unmanageable. The result will be widespread power and utilization inefficiencies at a time when energy and efficiency are at a premium.

That is why we believe that the data centres of the future will be vastly different creatures than the power-guzzling, administrative headaches of today.

In coming years, the hardware systems that occupy these massive compute farms will be designed in concert with the software they are intended to run.

Indeed, even the buildings that house the data centre will be custom-built for the type of workload and processes the systems will handle. These new data centres will use less power, produce better results and require less administration.

When it comes to data centres, complexity is the enemy of efficiency. And at the moment, thousands of data centres find themselves in the midst of mind-boggling complexity. One technology that holds the promise of greatly reducing that complexity is virtualization, the process that pools disparate computing resources — processors, memory, storage — to appear as one.

But so far virtualization has been mostly about consolidating servers. This is helpful, and it improves utilization, but it does not cut down on software complexity or significantly reduce administrative costs.

Three technologies are emerging that will radically alter the virtualization landscape. The first is the concept of a Virtual Machine Image, or VM Image.

A VM Image is the bundling of the operating system, middleware and application into a self-contained, fully operational package.

These images have instructions attached to them (metadata) that enable them to simply drop into a data centre environment, find the necessary resources and execute.
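To make the idea concrete, here is a rough sketch in Python of such a self-describing package; the field names are illustrative, not drawn from any published image format.

```python
from dataclasses import dataclass

@dataclass
class VMImage:
    """A self-contained bundle: operating system, middleware and
    application, plus the metadata that lets it place itself."""
    name: str
    disk_path: str        # the bundled OS + middleware + application
    cpus_required: int    # resources the image must find on arrival
    memory_mb: int
    storage_gb: int
    entry_point: str      # what to execute once resources are found

image = VMImage(
    name="order-processing",
    disk_path="/images/order-processing.img",
    cpus_required=4,
    memory_mb=8192,
    storage_gb=100,
    entry_point="/opt/app/start.sh",
)
```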

The second technology is VM Scheduling. This is akin to system provisioning, in which an administrator can decide when and where to run a particular VM Image.

It allows for rapid scheduling and prioritization of shared resources among the other VM Images on the same system, making for a more dynamic and efficient environment.
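A minimal sketch of the scheduling idea, with hypothetical names throughout: requests are ranked by priority, and each is placed on the first host with enough spare capacity.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: int
    free_mb: int

@dataclass
class Request:
    name: str
    priority: int          # higher numbers run first
    cpus: int
    memory_mb: int

def schedule(requests, hosts):
    """Place each request, highest priority first, on the first host
    that still has enough spare CPU and memory; the rest must wait."""
    placements = {}
    for req in sorted(requests, key=lambda r: r.priority, reverse=True):
        for host in hosts:
            if host.free_cpus >= req.cpus and host.free_mb >= req.memory_mb:
                host.free_cpus -= req.cpus
                host.free_mb -= req.memory_mb
                placements[req.name] = host.name
                break
    return placements

hosts = [Host("h1", 8, 16384), Host("h2", 4, 8192)]
requests = [Request("billing", 10, 4, 8192), Request("batch", 1, 4, 8192)]
print(schedule(requests, hosts))   # {'billing': 'h1', 'batch': 'h1'}
```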

The third technology is VM Mobility, which is the ability to move virtual images around the data centre while they are actually running, without skipping a beat.

Though there is still work to be done on the standards and licensing fronts, these technologies have the potential to greatly improve the dynamism and efficiency of the data centre.
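Live mobility of this kind is commonly built on an iterative "pre-copy" of memory. The following toy loop illustrates the principle; it is not any vendor's actual algorithm.

```python
class FakeVM:
    """Stand-in for a running VM whose write rate decays each round,
    as pre-copy migration assumes it eventually will."""
    def __init__(self, pages):
        self.pending = pages           # dirty pages awaiting transfer

    def take_dirty(self):
        dirty = self.pending
        self.pending = dirty // 4      # guest re-dirties a shrinking set
        return dirty

def live_migrate(vm, send, threshold=32, max_rounds=5):
    """Toy pre-copy loop: copy all memory while the VM keeps running,
    then re-copy only the re-dirtied pages each round, until the delta
    is small enough for a brief pause, a final copy and a resume on
    the destination machine."""
    for _ in range(max_rounds):
        dirty = vm.take_dirty()
        send(dirty)
        if dirty <= threshold:         # brief pause covers the last delta
            return
    send(vm.take_dirty())              # fallback: plain stop-and-copy

vm = FakeVM(pages=4096)
live_migrate(vm, send=lambda n: print(f"sent {n} pages"))
# sent 4096, then 1024, 256, 64 and finally 16 pages while paused
```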

This new world of virtualization will require some significant changes to the data centre architecture itself. The major new concept that will emerge is something we are calling an “ensemble.”

These ensembles are essentially collections of homogeneous hardware: clusters with systems management capability built in, covering everything from workload optimization to restart and recovery. The key to these ensembles is their autonomic abilities.

In other words, they will monitor their own utilization, heat production and power consumption, dynamically allocating resources as needed.

By using the principles of autonomic computing — Monitor, Analyze, Plan, Execute (MAPE) — these ensembles require very little by way of administration.
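A stripped-down sketch of one turn of that loop, with invented thresholds, sensor readings and node names:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    utilization: float     # 0.0 to 1.0
    temperature_c: float

UTIL_LIMIT = 0.85          # invented policy thresholds
TEMP_LIMIT = 45.0

def monitor(nodes):
    """Monitor: collect the current sensor readings from every node."""
    return [(n, n.utilization, n.temperature_c) for n in nodes]

def analyze(readings):
    """Analyze: flag nodes that violate the utilization or heat policy."""
    return [n for n, util, temp in readings
            if util > UTIL_LIMIT or temp > TEMP_LIMIT]

def plan(hot_nodes, nodes):
    """Plan: pair each overloaded node with the least-loaded spare."""
    spares = sorted((n for n in nodes if n not in hot_nodes),
                    key=lambda n: n.utilization)
    return list(zip(hot_nodes, spares))

def execute(moves):
    """Execute: here we just report; a real loop would invoke
    VM Mobility to shift the workload, then go back to monitoring."""
    for src, dst in moves:
        print(f"migrate load: {src.name} -> {dst.name}")

nodes = [Node("n1", 0.95, 48.0), Node("n2", 0.30, 35.0), Node("n3", 0.50, 40.0)]
execute(plan(analyze(monitor(nodes)), nodes))   # migrate load: n1 -> n2
```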

The goal of this re-architecting is to simplify the data centre. Though the dynamic scheduling of workloads is actually a fantastically complex process, the interface that is exposed to the administrator is quite simple.

By using the ensemble structure, managed by a service-oriented virtual machine interface, the data centre becomes a system of self-contained components that interact with each other on an as-needed basis.
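In caricature, the administrator-facing interface could be as small as a single placement call, with the complexity hidden inside each ensemble; everything below is hypothetical.

```python
class Ensemble:
    """A self-contained building block behind a small service
    interface; the management complexity stays hidden inside it."""
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity

    def can_host(self, demand):
        return self.capacity >= demand

    def deploy(self, image_name, demand):
        self.capacity -= demand
        return f"{image_name} running on {self.name}"

def place(image_name, demand, ensembles):
    """The administrator-facing call: one request goes in, and the
    self-contained components sort out who takes the workload."""
    for ensemble in ensembles:
        if ensemble.can_host(demand):
            return ensemble.deploy(image_name, demand)
    return "no capacity available"

farm = [Ensemble("web-pool", 10), Ensemble("batch-pool", 40)]
print(place("payroll", 25, farm))   # payroll running on batch-pool
```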

There is one more element that needs to be rethought before data centres can reach their full potential. Though it may sometimes seem that they exist only in the world of ones and zeros, they are actual physical structures that require tremendous amounts of power and cooling in order to operate.

In this way, data centres are not unlike factories, where the lease, maintenance and power consumption are all factored into the cost of finished goods.

Like a factory, a data centre has an optimal efficiency that can be reached by matching the machines to the building (or vice versa). By applying some of the same economic principles that measure the efficiency of factories, we have arrived at some surprising recommendations for optimizing data centres. For example, bigger is not always better. The cost benefits of scaling a data centre begin to diminish once it grows too large and requires too much electricity. Maximum efficiency points will emerge based on the workload of the data centre and its surrounding environment.
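The shape of that argument can be seen in a toy cost model; the coefficients below are invented purely to show the curve.

```python
def cost_per_server(servers):
    """Illustrative only, with invented coefficients: fixed costs
    amortize with scale, but power and cooling overhead grows
    super-linearly once the facility is very large."""
    fixed = 2_000_000                  # land, building shell, staffing
    linear = 3_000 * servers           # hardware, racks, cabling
    cooling = 5 * servers ** 1.5       # grows faster than capacity does
    return (fixed + linear + cooling) / servers

for n in (1_000, 5_000, 20_000, 80_000):
    print(f"{n:>6} servers: ${cost_per_server(n):,.0f} each")
# the per-server cost falls, bottoms out, then rises again as the
# facility's power and cooling burden outpaces its scale benefits
```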

By optimizing all of the components that reside in this building block, and monitoring power and heat with sensors that feed back into the systems management capabilities discussed earlier, the data centre can attain maximum efficiency.

At a time when energy use carries heavy costs, both financial and environmental, every ounce of efficiency is highly valuable.

In short, the data centre of the future will be a much more integrated, purpose-built machine. The type of workload will dictate the design of everything from the software to the building itself.

And there may be a variety of different types of data centres, based on their respective purposes. Not as simple as a hammer, but just as efficient and effective. CNS

Laura Anderson is Program Director, Service Engineering, at the IBM Almaden Research Center in San Jose, Calif.
