
- About the Authors
- Dedication
- Authors’ Acknowledgments
- Contents at a Glance
- Table of Contents
- Introduction
- About This Book
- Foolish Assumptions
- How This Book Is Organized
- Part I: Introducing Service Management
- Part II: Getting the Foundation in Place
- Part VI: The Part of Tens
- Icons Used in This Book
- Where to Go from Here
- Knowing That Everything Is a Service
- Looking at How the Digital World Has Turned Everything Upside Down
- Implementing Service Management
- Managing Services Effectively
- Seeing the Importance of Oversight
- Understanding Customers’ Expectations
- Looking at a Service from the Outside
- Understanding Service Management
- Dealing with the Commercial Reality
- Understanding What Best Practices and Standards Can Do for You
- Using Standards and Best Practices to Improve Quality
- Finding Standards
- Getting Certified
- ITIL V3: A Useful Blueprint for Enterprise Service Management
- Seeing What Service Management Can Do for Your Organization
- Starting with the Service Strategy
- Creating a Service Management Plan
- Defining a Service Management Plan
- Automating Service
- Getting to the Desired End State
- Four Key Elements to Consider
- Federating the CMDB
- Balancing IT and Business Requirements
- Measuring and Monitoring Performance
- Making Governance Work
- Developing Best Practices
- Seeing the Data Center As a Factory
- Optimizing the Data Center
- Managing the Data Center
- Managing the Facility
- Managing Workloads
- Managing Hardware
- Managing Data Resources
- Managing the Software Environment
- Understanding Strategy and Maturity
- Seeing How a Service Desk Works
- Managing Events
- Dividing Client Management into Five Process Areas
- Moving the Desktop into the Data Center
- Creating a Data Management Strategy
- Understanding Virtualization
- Managing Virtualization
- Taking Virtualization into the Cloud
- Taking a Structured Approach to IT Security
- Implementing Identity Management
- Employing Detection and Forensics
- Encrypting Data
- Creating an IT Security Strategy
- Defining Business Service Management
- Putting Service Levels in Context
- Elbit Systems of America
- Varian Medical Systems
- The Medical Center of Central Georgia
- Independence Blue Cross
- Sisters of Mercy Health System
- Partners HealthCare
- Virgin Entertainment Group
- InterContinental Hotels Group
- Commission scolaire de la Région-de-Sherbrooke
- CIBER
- Do Remember Business Objectives
- Don’t Stop Optimizing after a Single Process
- Do Remember Business Processes
- Do Plan for Cultural Change
- Don’t Neglect Governance
- Do Keep Security in Mind
- Don’t Try to Manage Services without Standardization and Automation
- Do Start with a Visible Project
- Don’t Postpone Service Management
- Hurwitz & Associates
- ITIL
- ITIL Central
- ISACA and COBIT
- eSCM
- CMMI
- eTOM
- TechTarget
- Vendor Sites
- Glossary
- Index

Chapter 11: Managing the Data Center 131
Useful facility-management KPIs
Other KPIs are likely to be useful. You can use the concept of a unit workload measurement, for example, based on calculating how an average application will behave with a typical or average level of service. For many years, Bill Gates thought of the power of a PC in terms of the Basic language compiler he once wrote as a measurement unit. We suggest that you do the same thing, using a typical business application.
Here are some KPIs that are likely to be worth tracking:
Power costs (by unit workload)
Average number of operational staff members (by unit workload)
Average number of support staff members (by unit workload)
Average software support costs (by unit workload)
Average hardware support costs (by unit workload)
Average data storage costs (by unit workload)
Average server use (in terms of memory and CPU use)
Average workload per square foot of floor space based on facility costs and efficiency
Keeping KPIs like these makes it possible to roughly allocate IT costs back to specific departments in the organization.
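The arithmetic behind this kind of allocation is simple division and proportion. The following sketch shows one way it might look in code; every function name, cost figure, and department here is a made-up assumption for illustration, not a prescription from any real costing system.

```python
# Hypothetical sketch: normalizing data-center costs by "unit workload"
# and allocating total costs back to departments in proportion to the
# unit workloads each department consumes. All figures are invented.

def kpis_per_unit_workload(totals, unit_workloads):
    """Divide each total cost category by the number of unit workloads run."""
    return {name: cost / unit_workloads for name, cost in totals.items()}

def allocate_costs(totals, dept_workloads):
    """Charge each department in proportion to its share of unit workloads."""
    total_cost = sum(totals.values())
    total_units = sum(dept_workloads.values())
    return {dept: total_cost * units / total_units
            for dept, units in dept_workloads.items()}

# Illustrative monthly figures for one data center
monthly_totals = {"power": 12000.0, "staff": 48000.0, "storage": 6000.0}
per_unit = kpis_per_unit_workload(monthly_totals, unit_workloads=300)
by_dept = allocate_costs(monthly_totals, {"sales": 120, "finance": 90, "hr": 90})
```

Because the allocation is proportional, the departmental charges always sum back to the total cost, which keeps the rough charge-back honest.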
A disaster-recovery plan should cover issues such as e-commerce processes that touch suppliers and customers; e-mail, which is often the lifeblood of business operations; online customer service systems; customer and employee support systems; various systems that support the corporate infrastructure, such as sales, finance, and human resources; and research and development. Depending on the company’s resources and income sources, management may need to consider other factors in a disaster-recovery plan.
Managing Workloads
The third layer in Figure 11-2 (refer to “Optimizing the Data Center,” earlier in this chapter) groups those processes that relate to managing workloads: from an IT perspective, managing the corporate IT resource. Being simplistic, we could say that you have applications that you need to run and resources to run them, so all you’re really doing is managing those workloads.

A few decades ago, in the age of the mainframe, workload management really was like that. It boiled down to scheduling jobs (often by writing complex job-control instructions) and monitoring the use of the computer resource. In those days, few, if any, workloads ran around the clock; thus, workload management was a scheduling activity involving queuing up jobs to run and setting priorities between jobs. Some workloads had dependencies — a specific outcome from one program might alter what needed to be done next — but all dependencies usually could be automated via the job-control language.
In today’s far more complex world, many more applications need to run, and many more computers exist to run them. Some workloads are permanent, running all the time. In most companies, e-mail is such an application. Quite a few companies also have Web-facing applications with resource requirements that can fluctuate dramatically. Virtualization capabilities make it possible (to some degree) to create virtual resource spaces. On its own, the World Wide Web increased the number of dependencies among applications, and when Web Services standards were created, the number of dependencies increased. SOA makes matters worse.
So workload management involves recording known dependencies among programs and applications — an activity that provides useful information to the configuration management database (CMDB), as noted in Chapter 9 — and scheduling those workloads to run within the available resources. This process has to be flexible so that an application’s resources can be boosted when transaction rates escalate and reduced as those rates decline. In addition, a host of support functions have to run in conjunction with the business applications, including monitoring software and (where appropriate) backup jobs.
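At its core, scheduling workloads around known dependencies is an ordering problem: a job can run only after everything it depends on has completed. A minimal sketch, with entirely invented job names, might use Python's standard-library topological sorter:

```python
# Illustrative sketch of dependency-aware workload scheduling: compute a
# run order in which every job follows the jobs it depends on. The job
# names are invented; a real CMDB would supply the dependency data.
from graphlib import TopologicalSorter

def run_order(dependencies):
    """dependencies maps each job to the set of jobs it must wait for."""
    return list(TopologicalSorter(dependencies).static_order())

jobs = {
    "nightly-backup": set(),
    "extract-sales": {"nightly-backup"},
    "load-warehouse": {"extract-sales"},
    "email-report": {"load-warehouse"},
}
order = run_order(jobs)
```

A real workload manager layers much more on top (priorities, resource limits, elastic scaling), but the dependency ordering above is the piece that the job-control languages of the mainframe era were automating.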
Application self-service
Increasingly, companies are giving users, customers, and partners direct access to applications that support everything from ordering to status inquiries. Customers and users really like to be able to access these resources directly, but this type of direct interaction with applications complicates workload management because it makes predicting future workloads harder. Behind self-service applications, you need the usual well-orchestrated set of service management capabilities.
Application self-service normally is automatic — that is, whenever the user requests a service, the process happens instantaneously, without any human intervention. To realize this level of sophistication, application self-service has to have three processes happening behind the scenes:
Identity management capability (to make sure that the user has the authority to access the application)

A portal interface (to make it easier for the user to access specific components or data elements)
Resource provisioning capability to execute the user request (to bring the requested application resource to the right place at the right time)
If you’re familiar with SOA, you may recognize the focus on components — and indeed, self-service applications are similar. Most SOA implementations work precisely this way. In such cases, services usually are recorded and made available in some sort of service catalog, perhaps called a service registry/repository in SOA. This catalog may simply be a list of what applications a user can choose to run, and it may be automated to the point where the user simply selects a capability that is immediately made available through a portal. The service catalog could work in many ways, perhaps providing pointers to applications. If an application were sitting in a portal interface, the catalog could direct a user to that application.
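The three behind-the-scenes processes fit together as a simple pipeline: check identity, look the service up in the catalog, then provision it. The sketch below is a toy illustration under those assumptions; the users, permissions, and catalog entries are all invented.

```python
# Minimal sketch of the three steps behind application self-service:
# identity management, catalog lookup, resource provisioning. Every
# user name, permission, and URL here is an invented assumption.

PERMISSIONS = {"alice": {"order-status", "billing"}}          # identity data
SERVICE_CATALOG = {"order-status": "https://portal.example/order-status"}

def handle_self_service(user, service):
    # 1. Identity management: is the user authorized for this service?
    if service not in PERMISSIONS.get(user, set()):
        return None
    # 2. Service catalog: where does this capability live?
    endpoint = SERVICE_CATALOG.get(service)
    if endpoint is None:
        return None
    # 3. Resource provisioning: make the capability available via the portal.
    return f"provisioned {service} at {endpoint}"

result = handle_self_service("alice", "order-status")
```

The point of the sketch is the ordering: authorization is decided before the catalog is even consulted, so an unauthorized user learns nothing about what services exist.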
If you want to know a lot more about SOA, we recommend that you take a look at Service Oriented Architecture For Dummies, 2nd Edition (Wiley). We think it’s a great book, even if we did write it!
One risk in implementing self-service capabilities is creating an unexpected level of demand for resources, especially when Web-based applications are readily available to customers. If a major weather situation causes customers to reschedule flights online, for example, airline systems may experience a major unanticipated spike in access. As more and more customers rely on self-service applications, the workload management environment supporting application self-service needs to be highly automatic and sophisticated.
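The automation needed to absorb such spikes often reduces to a capacity rule: given the observed demand, how many instances of the application should be running? A hedged sketch of one such rule, with arbitrary invented thresholds, follows.

```python
# Hedged sketch of the kind of automatic capacity rule a self-service
# workload might use when demand spikes. The capacity-per-replica and
# safety-floor figures are arbitrary assumptions for illustration.
import math

def replicas_needed(requests_per_second, capacity_per_replica=100, minimum=2):
    """Round up to enough application instances, never below a safety floor."""
    return max(minimum, math.ceil(requests_per_second / capacity_per_replica))
```

During a weather-driven rebooking surge, the rule scales capacity up with demand and, just as importantly, scales it back down once the spike passes.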
IT process automation
Implementing an efficient flow of work among people working on the same service management activity and teams working on related activities is one of the primary keys to optimizing the efficiency of the data center. We refer to the design and implementation of these workflows as IT process automation.
It’s difficult to overstate the contribution that the intelligent use of IT process automation can make. We can draw a clear parallel between this process and integration infrastructure, which we discuss in Chapter 9. The function of integration infrastructure is integrating all the service management software applications so that they can share data effectively and don’t suffer the inherent inefficiencies of silo applications. Similarly, the function of IT process automation is integrating service management activities and processes so that they work in concert.

Workload automation
From the automation perspective, IT process automation implements workflows that schedule the progress of activities, passing work and information from one person to another, but it also involves integrating those workflows with the underlying service management applications that are used in some of the tasks.
To continue our analogy of the data center as a factory (refer to “Seeing the Data Center As a Factory,” earlier in this chapter), IT process automation is about designing the flow of activities so that they happen in a timely manner and keep the production line rolling, whether those activities involve fixing problems that have occurred or commissioning new hardware and software to add to the data center. The ideal situation would be not only to have all important service management processes occurring as automated workflows, but also to have a dashboard-based reporting system that depicts all the activities in progress in the data center, providing alerts if bottlenecks arise.
Such a reporting system could also report on the data center’s important KPIs, providing a real-time picture of the health of the whole data center.
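Stripped to its essentials, such a workflow engine runs named steps in order, times each one, and raises an alert when a step stalls. The toy sketch below illustrates that idea; the step names and the one-second threshold are invented assumptions, not features of any real product.

```python
# Toy sketch of IT process automation: run a workflow of named steps in
# order and collect dashboard-style alerts for steps that run too long.
# The step names and the bottleneck threshold are invented.
import time

def run_workflow(steps, slow_threshold_seconds=1.0):
    """steps is a list of (name, callable) pairs; returns bottleneck alerts."""
    alerts = []
    for name, step in steps:
        start = time.monotonic()
        step()                                  # do the step's actual work
        elapsed = time.monotonic() - start
        if elapsed > slow_threshold_seconds:
            alerts.append(f"bottleneck: {name} took {elapsed:.1f}s")
    return alerts

workflow = [
    ("open-ticket", lambda: None),
    ("provision-server", lambda: None),
    ("notify-requester", lambda: None),
]
alerts = run_workflow(workflow)
```

A production workflow engine would add parallel branches, human approval steps, and integration with the underlying service management applications, but the measure-and-alert loop is the kernel of the dashboard described above.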
Managing Hardware
Figure 11-2 (refer to “Optimizing the Data Center,” earlier in this chapter) groups the following service management processes on the hardware-environment level:
Desktop and device management
Hardware provisioning
Virtualization
Network management
Most of these processes are discussed in other chapters, so except for the last two, we intend only to introduce them in the following sections.
Desktop and device management
The desktop traditionally has been managed as almost a separate domain by a separate team outside the data center, with the primary KPI being the annual cost of ownership (including support costs). This situation has changed recently, for two reasons:
1. Desktop virtualization is feasible now.
2. An increasing need exists to manage mobile devices — whether those devices are laptops, smartphones, or mobile phones — as extensions of the corporate network.
Hardware provisioning and virtualization
Traditionally, hardware provisioning was relatively simple. Hardware was bought, commissioned, and implemented with the knowledge that it would be designated for a specific application for most, if not all, of its useful life. Eventually, it would be replaced.
With the advent of virtualization, the provisioning of hardware became more complex but also more economical. Today, virtualization and hardware provisioning are inextricably bound together.
Network management
Network management constitutes the set of activities involved in maintaining, administering, and provisioning resources on the corporate network. The corporate network itself may embrace multiple sites and involve communications that span the globe.
The main focus of network management activity is simply monitoring traffic and keeping the network flowing, ideally identifying network resource problems before they affect the service levels of applications. In most cases, the primary KPI is based on network performance, because any traffic problems on the network will affect multiple applications.
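The simplest form of that KPI is link utilization: traffic carried divided by capacity over an interval, with a warning threshold set well below saturation so problems surface before applications feel them. A sketch, with invented link names and an assumed 80 percent threshold:

```python
# Illustrative sketch of a basic network-performance KPI: per-link
# utilization with an early-warning threshold. Link names, capacities,
# and the 80% threshold are assumptions for illustration.

def link_utilization(bits_transferred, interval_seconds, link_capacity_bps):
    """Fraction of the link's capacity used over the measurement interval."""
    return bits_transferred / (interval_seconds * link_capacity_bps)

def warnings(samples, threshold=0.8):
    """samples: (link_name, bits, seconds, capacity_bps) tuples."""
    return [name for name, bits, secs, cap in samples
            if link_utilization(bits, secs, cap) > threshold]

# A 1 Gb/s core uplink that carried 9 Gb in 10 s is running hot (90%);
# a 100 Mb/s branch link that carried 0.2 Gb in 10 s is fine (20%).
samples = [("core-uplink", 9e9, 10, 1e9), ("branch-wan", 2e8, 10, 1e8)]
hot_links = warnings(samples)
```

Flagging the core uplink at 90 percent, before it saturates, is exactly the "identify problems before they affect service levels" behavior the KPI is meant to drive.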
An asset discovery application (which we mention in Chapter 9 in connection with the CMDB), if it exists, normally is under the control of the network management team because the application is likely to provide important data.
The network management team is likely to work closely with the IT security team because members of IT security will be the first responders to any security attacks that the organization suffers.
Recent innovations in network technology are likely to change some network management processes. Previously, for example, the capacity of a network was controlled by the capacity of the physical network. Now, major network