Feature

A new model emerges

It's called utility computing and it allows IT departments to align resources and quickly adapt to changing demands.


January 1, 2005  



The pace of business has escalated and competition has never been more intense. As a result, the most important trait for any organization’s long-term survival is adaptability.

That is why utility computing holds so much promise.

This new model was born out of an increasing reliance on electronic information. Software applications provide critical support for nearly all business processes, and they must be managed well, a job the utility computing model takes on directly.

But even with all the buzz, IT managers still lack a clear view of how to adopt it to help their organizations align computing resources with the changing needs of the business.

Utility computing represents a change in the way IT delivers applications for both private and public sector organizations.

Organizations want IT to run as dependably as their water or telephone service, thus the utility analogy.

Just like we expect running water, electricity and other utilities to work at home if we pay our bills, we all expect dependable IT services.

With a utility computing approach, companies benefit from reduced costs, increased efficiency and higher levels of availability and performance.

However, in order to implement it successfully, IT departments need to consider three primary areas of focus: availability, performance, and automation.

Optimized availability:

The first requirement of utility computing is that data and applications must always be available. Users need to be insulated from disruptive events ranging from server failures to a complete site outage.

Despite the industry’s preoccupation with eliminating downtime, “always-on” computing remains a challenge.

According to research firm IDC Corp., when disaster strikes, enterprises on average can expect to experience three to seven days of downtime per event. Falling hardware costs have made it possible for many companies to protect data with layers of redundancy, but that redundancy makes some IT structures more difficult to access.

What can IT managers do to take availability to maximum levels? First, they need to ask themselves some important questions:

Is all enterprise data backed up? The data in branch offices, remote offices, home offices, desktops and laptops is unquestionably valuable, but because of costs and logistical problems it is not usually backed up.

The utility computing model calls for centralized, automated, cost-effective backup of these resources.

How is data backed up and recovered? Data volumes mirrored at one or more remote sites can now be reliably replicated over IP networks to reduce the amount of data exposed to loss and to speed up disaster recovery.

Automated server provisioning eliminates error-prone manual recovery techniques. Clustering optimizes availability by automatically detecting application and database performance bottlenecks or server failures and moving these critical services to other servers within the cluster.
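
To make this concrete, here is a minimal sketch, in Python, of the kind of health-check and failover loop that clustering software automates. The host names, port, thresholds and the fail_over placeholder are illustrative assumptions, not any particular vendor’s implementation.

```python
# Minimal sketch of cluster failover: poll the active node's health and
# move the service to a standby after repeated failed checks.
# Hostnames, port and the provisioning steps are hypothetical.
import socket
import time

NODES = ["app-primary.example.com", "app-standby.example.com"]  # hypothetical
SERVICE_PORT = 8080
CHECK_INTERVAL = 5       # seconds between health checks
FAILURE_THRESHOLD = 3    # consecutive failures before failing over


def is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Treat a successful TCP connection as a passing health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def fail_over(failed: str, standby: str) -> None:
    """Placeholder for the real work: attach shared storage, start the
    application on the standby and repoint clients (e.g. via a virtual IP)."""
    print(f"Failing over service from {failed} to {standby}")


def monitor() -> None:
    active, standby = NODES
    failures = 0
    while True:  # runs until interrupted
        if is_healthy(active, SERVICE_PORT):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                fail_over(active, standby)
                active, standby = standby, active
                failures = 0
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    monitor()
```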

Utility computing also includes virtualization and pooling of storage resources, which enables IT departments to drive up storage utilization rates and reduce costs.

Storage virtualization reduces administrative costs by providing centralized control of heterogeneous resources from a single graphical user interface.

Effective data lifecycle management further reduces the costs of data availability by automatically migrating data to the most cost-effective storage medium and allowing enterprises to access it selectively for regulatory compliance.
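
As a rough illustration of the policy side of data lifecycle management, a scheduled job might migrate files that have gone unused for a set period to cheaper storage tiers. The tier paths and age thresholds below are assumptions for the sketch, not a product’s defaults.

```python
# Sketch of a data lifecycle policy: files that have not been accessed
# recently are moved to progressively cheaper storage tiers.
# Tier paths and age thresholds are illustrative assumptions.
import shutil
import time
from pathlib import Path

PRIMARY = Path("/mnt/primary")     # expensive, high-performance storage
TIERS = [                          # ordered coldest-first: (min age in days, destination)
    (365, Path("/mnt/archive")),   # rarely touched data: cheapest medium
    (90, Path("/mnt/nearline")),   # warm data: mid-cost storage
]


def age_in_days(path: Path) -> float:
    """Days since the file was last accessed."""
    return (time.time() - path.stat().st_atime) / 86400


def migrate(file: Path) -> None:
    """Move a file to the cheapest tier whose age threshold it exceeds."""
    age = age_in_days(file)
    for min_age, tier in TIERS:
        if age >= min_age:
            tier.mkdir(parents=True, exist_ok=True)
            shutil.move(str(file), str(tier / file.name))
            return


def run_policy() -> None:
    for file in list(PRIMARY.rglob("*")):  # snapshot before moving anything
        if file.is_file():
            migrate(file)


if __name__ == "__main__":
    run_policy()
```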

Optimized performance:

Utility computing includes the ability to optimize end user response times, improve the overall quality of service and detect and remedy causes of performance degradation, all in real time.

This requires tools that can instrument the entire application stack, from the Web browser or client application to the storage device, even in complex heterogeneous environments. If end user response times are lagging, IT staff can break them down tier by tier to pinpoint problems.

IT staff should use a dashboard-style console that delivers alerts and reports, giving them early warning of developing problems along with pointers to appropriate remedial action. For example, if a database is running slowly, tuning storage management and the storage network can accelerate access to its data.
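
A simplified sketch of the tier-by-tier idea follows. The probes, tier names and the two-second response-time target are hypothetical stand-ins for real instrumentation of the web, application, database and storage layers.

```python
# Sketch of tier-by-tier response-time measurement: time each tier of a
# request and flag the slowest one when the end-to-end response exceeds
# a service-level target. The probes below are stand-ins for real checks.
import time
from typing import Callable, Dict

RESPONSE_TIME_TARGET = 2.0  # seconds; illustrative service-level target


def timed(probe: Callable[[], None]) -> float:
    """Run a probe against one tier and return the elapsed seconds."""
    start = time.perf_counter()
    probe()
    return time.perf_counter() - start


def check_request(probes: Dict[str, Callable[[], None]]) -> None:
    timings = {tier: timed(probe) for tier, probe in probes.items()}
    total = sum(timings.values())
    if total > RESPONSE_TIME_TARGET:
        slowest = max(timings, key=timings.get)
        print(f"ALERT: response took {total:.2f}s; slowest tier is "
              f"'{slowest}' at {timings[slowest]:.2f}s")


if __name__ == "__main__":
    # Hypothetical probes; real ones would issue an HTTP request, an
    # application call, a SQL query and a raw storage read.
    check_request({
        "web": lambda: time.sleep(0.1),
        "application": lambda: time.sleep(0.3),
        "database": lambda: time.sleep(1.2),
        "storage": lambda: time.sleep(0.6),
    })
```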

Automation:

With the continued decline in hardware costs, people have become the greatest expense for any IT department.

Handling routine tasks in today’s evolving heterogeneous environments is a costly and unnecessary hassle.

Automating processes releases IT from routine tasks to focus on more strategic activity and application development. Automation should enable IT resources to adjust to changing needs without operator intervention.

But automation does more than free up costly staff members for more productive work: it also speeds up processes to improve availability, ensures that things are done right the first time and saves costs through more effective management of resources.

Here are several examples of what automation technology can do to bring the enterprise closer to the utility computing model:

* Virtualization and pooling of storage devices, driving up storage utilization and reducing hardware costs.

* Simplification of storage management by automating common tasks from simple graphical interfaces.

* Virtualization and pooling of compute capacity: Server utilization is notoriously low, often 20% at best, and applications vary over time in their need for processing. Drawing processing resources from a shared pool drives up server utilization and aligns capacity with the needs of the business (a rough sketch follows this list).

* Provisioning a second server anywhere in the world when a server, an operating system, or an application fails. Automated migration of the application makes the failover practically unnoticeable to users.

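As a rough sketch of the compute-pooling idea referenced above, a scheduler might place each new workload on the least-loaded server that still has headroom. The server names, utilization figures and placement rule are illustrative assumptions.

```python
# Sketch of compute pooling: new workloads are placed on the
# least-utilized server in a shared pool rather than on dedicated,
# underused machines. Names and figures are hypothetical; a real
# implementation would read live utilization metrics.
from dataclasses import dataclass
from typing import List


@dataclass
class Server:
    name: str
    cpu_utilization: float  # fraction of capacity in use, 0.0 to 1.0


def place_workload(pool: List[Server], workload: str, demand: float) -> Server:
    """Assign a workload to the least-loaded server with enough headroom."""
    candidates = [s for s in pool if s.cpu_utilization + demand <= 1.0]
    if not candidates:
        raise RuntimeError("Pool exhausted; provision another server")
    target = min(candidates, key=lambda s: s.cpu_utilization)
    target.cpu_utilization += demand
    print(f"Placed {workload} on {target.name} "
          f"(now at {target.cpu_utilization:.0%} utilization)")
    return target


if __name__ == "__main__":
    pool = [Server("srv-01", 0.15), Server("srv-02", 0.20), Server("srv-03", 0.10)]
    place_workload(pool, "payroll-batch", demand=0.30)
    place_workload(pool, "web-frontend", demand=0.25)
```
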
The road to utility computing has many facets and challenges, and enterprises can deploy components of a utility computing model in different ways. These challenges can make it difficult to determine where to start when considering a utility model.

While hardware is a necessary commodity, it is the software that provides the functionality and the true value of the solution. This is especially true in an IT infrastructure that involves hardware from multiple vendors, which in itself has long been one of IT’s biggest challenges.

The transition from a traditional labour-intensive, hardware-driven environment with a single vendor stack to a heterogeneous, integrated utility computing environment will not happen overnight.

But by following a single utility computing vision, built with software and implemented over time through “best of breed” building blocks, organizations can get on that road.

Finally, consider these five steps to utility computing as a starting point:

* Discover — Figure out what your actual IT usage is (see the sketch after this list).

* Consolidate — Develop a shared infrastructure.

* Standardize — Choose a common way to manage the infrastructure.

* Automate — Take the costs out of IT so things will run automatically.

* Deliver — Get users to think of IT as a service rather than as a right.

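For the “Discover” step referenced above, even a simple inventory script illustrates the principle. The host-to-path mapping below is a hypothetical placeholder; a real discovery tool would also track CPU, memory and application usage across the environment.

```python
# Sketch of the "Discover" step: report how much storage each system is
# actually using. The host-to-path mapping is a stand-in; a real tool
# would query remote systems rather than local mount points.
import shutil

HOSTS = {
    "fileserver-01": "/",   # hypothetical: local paths standing in for hosts
    "dbserver-01": "/",
}


def report_usage() -> None:
    for host, path in HOSTS.items():
        usage = shutil.disk_usage(path)
        pct = usage.used / usage.total * 100
        print(f"{host}: {usage.used / 1e9:.1f} GB of "
              f"{usage.total / 1e9:.1f} GB in use ({pct:.0f}%)")


if __name__ == "__main__":
    report_usage()
```
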
Yesterday’s security and capacity concerns will be eliminated, leaving resources free to address how to deliver on-demand computing that, like a basic utility, operates in a cost-effective manner.

Fred Dimson is general manager and director of operations at Toronto-based Veritas Software (Canada) Inc., a provider of software and services that enable utility computing. He can be reached at fred.dimson@veritas.com.