
Chapter 15

Virtualizing the Computing Environment

In This Chapter

Seeing how virtualization evolved

Knowing how virtualization works

Dealing with management issues

Moving virtualization to the cloud

Why are we putting virtualization and cloud computing together in a discussion of service management?

Virtualization (using computer resources to imitate other computer resources or even whole computers) is one of the technical foundations of cloud computing (providing computing services via the Internet). We think that these two concepts are important to the data center and its destiny.

In this chapter, we present an overview of virtualization: what it means and how it is structured. We follow that discussion by explaining cloud computing. We also look at how the combination of virtualization and cloud computing is transforming the way services are managed.

Understanding Virtualization

Many companies have adopted virtualization as a way to gain efficiency and manageability in their data center environments. Virtualization has become a pragmatic way for organizations to shrink their server farms.


Essentially, virtualization decouples the software from the hardware. Decoupling means that the software is put in a separate container so that it's isolated from the underlying operating environment.

Virtualization comes in many forms; in each case, one resource emulates (imitates) another resource. Here are some examples:

Virtual memory: PCs have virtual memory, which is an area of the disk that's used as though it were memory. In essence, the computer manages its memory more efficiently: it simply puts information that won't be used for a while on disk, freeing memory space. Although disks are very slow in comparison with memory, the user may never notice the difference, especially if the system does a good job of managing virtual memory. The substitution works surprisingly well (see the short sketch after this list).

Software: Companies have built software that can emulate a whole computer. That way, one computer can work as though it were actually 20 computers. If you have 1,000 computers and can reduce the number to 50, the gain is very significant. This reduction results in less money spent not only on computers, but also on power, air conditioning, maintenance, and floor space.
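To make the virtual-memory idea above concrete, here is a minimal Python sketch. The file name and sizes are illustrative assumptions, not anything from this chapter: data written through a memory-mapped file looks like ordinary memory to the program, while the operating system decides when the underlying pages actually live in RAM and when they sit on disk.

import mmap
import os

PAGE = mmap.PAGESIZE                 # typically 4,096 bytes
path = "scratch.bin"                 # hypothetical scratch file

with open(path, "wb") as f:
    f.truncate(64 * PAGE)            # reserve 64 "pages" of disk space

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)   # map the whole file into memory
    mem[0:5] = b"hello"              # use it like a plain byte buffer
    mem.flush()                      # the OS writes dirty pages back to disk
    mem.close()

os.remove(path)                      # clean up the scratch file

The program never needs to know which bytes are resident in RAM at any given moment; that bookkeeping belongs to the operating system, which is exactly why the substitution usually goes unnoticed.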

In a world in which almost everything is a service, virtualization is a fundamental mechanism for delivering services. Indeed, virtualization provides a platform for optimizing complex IT resources in a scalable manner (in a way that can grow efficiently), which is ideal for delivering services.

We can summarize the nature of virtualization with three terms:

Partitioning: In virtualization, many applications and operating systems (OSes) are supported within a single physical system by partitioning (separating) the available resources.

Isolation: Each virtual machine is isolated from its host physical system and from other virtualized machines. If one virtual instance crashes, the other virtual machines aren't affected, and data isn't shared between one virtual container and another.

Encapsulation: A virtual machine can be represented (and even stored) as a single file, so you can identify it easily based on the service it provides. In essence, the encapsulated virtual machine could be a complete business service. This encapsulated virtual machine can be presented to an application as a complete entity, and encapsulation can therefore protect each application so that it doesn't interfere with another application (see the sketch after this list).
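The following short Python sketch illustrates the partitioning and encapsulation ideas above. The field names and values are hypothetical and real virtual-machine formats differ, but the principle is the same: the guest's identity and its slice of the physical resources travel together in one self-contained file.

import json
from dataclasses import dataclass, asdict

@dataclass
class VirtualMachine:
    name: str          # identifies the service this virtual machine provides
    cpu_share: float   # fraction of the host CPU partitioned to this guest
    memory_mb: int     # memory carved out of the physical system
    disk_image: str    # path to the guest's isolated disk image

def save(vm, path):
    # Encapsulate the whole definition in a single file.
    with open(path, "w") as f:
        json.dump(asdict(vm), f, indent=2)

def load(path):
    with open(path) as f:
        return VirtualMachine(**json.load(f))

save(VirtualMachine("payroll-service", 0.25, 2048, "payroll.img"),
     "payroll.vm.json")
print(load("payroll.vm.json"))

Because each definition is a separate file describing a separate container of resources, identifying, copying, or moving a virtual machine by the business service it provides becomes a simple file operation.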


A short history of virtualization

IBM introduced virtualization in the early 1960s to enable users to run more than one operating system on a mainframe. Mainframe virtualization became less relevant to computing needs in the 1980s and 1990s. Indeed, in the 1990s, companies stopped worrying about the efficiency of the computer platform because computers were getting so powerful.

For more than a decade, IT organizations expanded the capabilities of their data centers by adding more and more servers. Servers had become so cheap that each time a new application was added, it was easier to buy a new server than to try to share resources with other applications. Eventually, organizations realized that the chore of maintaining, upgrading, and managing a large and growing number of servers was getting out of hand. The number of support-staff employees required to operate the data center was climbing swiftly, so the manpower cost of maintaining the data center (as a percentage of the total cost) was rising. At the same time, other costs were growing in an unpredicted manner, particularly the costs of electricity (to power the computers), air conditioning (to cool them), and floor space.

Scheduling a revolution

One of the main problems was that the servers that people had been happily adding to their networks were running horribly inefficiently. In the days of the mainframe, great efforts were made to use 100 percent of the computer's CPU and memory resources. Even under normal circumstances, it was possible to achieve better than 95 percent utilization. On the cheap servers that IT departments had been deploying, however, CPU efficiency was often 6 percent or less, sometimes as low as 2 percent. Memory and disk input/output (I/O) usage were similarly low.

This situation seems almost insane until you realize that most applications simply don't require a great deal of resources, and with the servers that were being delivered by the time the year 2000 rolled around, you didn't put more than one application on a server. Why? Because the operating systems that everyone bought (mostly Windows and Linux) didn't include any capability to schedule resources effectively between competing applications. In a competitive hardware market, vendors kept increasing the power of servers at an affordable price, and most of these servers had far more power than a typical application needed. The extra power didn't address the inefficiency, however, because Windows and Linux still couldn't share a machine sensibly between applications. And if an organization decided to stay with older, lower-powered hardware, it couldn't find people to maintain those aging platforms.

So if you had an application that only ever needed 5 percent of a current CPU, what were you going to do other than provide it with its own server? Some companies actually used old PCs for some applications of this kind, maintaining the PCs themselves, but there’s a limit to the amount of old equipment that you can reuse.

The solution to this squandering of resources was adding scheduling capability to computers, which is precisely what one IT vendor, VMware, introduced. Adding scheduling began to change the dynamics of computer optimization and set the stage for the modern virtualization revolution. The mainframe is dead; long live the mainframe!


Using a hypervisor in virtualization

If you’ve read about virtualization, you’ve bumped into the term hypervisor. You may have found this word to be a little scary. (We did when we first read it.) The concept isn’t technically complicated, however.

A hypervisor is an operating system, but more like the kind that runs on mainframes than like Windows, for example. You need one if you’re going to create a virtual machine. One twist: The hypervisor can load an OS as though that OS were simply an application. In fact, the hypervisor can load many operating systems that way.

You should understand the nature of the hypervisor. It’s designed like a mainframe OS because it schedules the amount of access that these guest OSes have to the CPU; to memory; to disk I/O; and, in fact, to any other I/O mechanisms. You can set up the hypervisor to split the physical computer’s resources. Resources can be split 50–50 or 80–20 between two guest OSes, for example.
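As a rough illustration of that resource splitting (a sketch only, not how any particular hypervisor is implemented; the guest names and the 80-20 split are assumptions taken from the example above), the following Python function hands out CPU time slices in proportion to each guest's configured share:

def schedule(guest_shares, total_slices):
    # Allocate `total_slices` CPU time slices in proportion to each
    # guest's configured share; shares don't need to sum to 1.0.
    total_share = sum(guest_shares.values())
    return {
        name: round(total_slices * share / total_share)
        for name, share in guest_shares.items()
    }

# Two guest operating systems splitting one physical CPU 80-20.
print(schedule({"guest_a": 0.8, "guest_b": 0.2}, total_slices=1000))
# {'guest_a': 800, 'guest_b': 200}

Each guest simply runs during its allotted slices and, as the text notes, never knows that the other slices exist.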

The beauty of this arrangement is that the hypervisor does all the heavy lifting. The guest OS doesn’t have any idea that it’s running in a virtual partition; it thinks that it has a computer all to itself.

Hypervisors come in several types:

Native hypervisors, which sit directly on the hardware platform

Embedded hypervisors, which are integrated into a processor on a separate chip

Hosted hypervisors, which run as a distinct software layer above both the hardware and the OS

Abstracting hardware assets

One of the benefits of virtualization is the way that it abstracts hardware assets, in essence allowing a single piece of hardware to be used for multiple tasks.

The following list summarizes hardware abstraction and its management:

File system virtualization: Virtual machines can access different file systems and storage resources via a common interface (a brief sketch of such an interface follows).
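The next Python sketch shows one way to read "a common interface" in practice. The class and method names are illustrative assumptions, not any real product's API: the calling code reads and writes files the same way whether the bytes live on the local disk or somewhere else entirely.

from abc import ABC, abstractmethod

class VirtualFileSystem(ABC):
    # What a guest sees: one uniform way to read and write files.
    @abstractmethod
    def read(self, path): ...
    @abstractmethod
    def write(self, path, data): ...

class LocalDiskFS(VirtualFileSystem):
    # Back end 1: files on the host's local disk.
    def read(self, path):
        with open(path, "rb") as f:
            return f.read()
    def write(self, path, data):
        with open(path, "wb") as f:
            f.write(data)

class InMemoryFS(VirtualFileSystem):
    # Back end 2: a RAM-only store, standing in for any other resource.
    def __init__(self):
        self._files = {}
    def read(self, path):
        return self._files[path]
    def write(self, path, data):
        self._files[path] = data

# The calling code is identical regardless of where the bytes actually live.
for fs in (LocalDiskFS(), InMemoryFS()):
    fs.write("example.txt", b"same interface, different storage")
    print(fs.read("example.txt"))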
