
Chapter 15: Virtualizing the Computing Environment 183

Virtual symmetric multiprocessing: A single virtual machine can use multiple physical processors simultaneously and thus pretend to be a server cluster. It also can emulate a fairly large grid of physical servers.

Virtual high availability support: If a virtual machine fails, that virtual machine needs to restart on another server automatically.

Distributed resource scheduler: You could think of the scheduler as being the super-hypervisor that manages all the other hypervisors. This mechanism assigns and balances computing capability dynamically across a collection of hardware resources that support the virtual machines. Therefore, a process can be moved to a different resource when one becomes available.

Virtual infrastructure client console: This console provides an interface that allows administrators to connect remotely to virtual center management servers or to an individual hypervisor so that the server and the hypervisor can be managed manually.
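
The distributed-resource-scheduler idea above can be illustrated with a toy sketch. This is not a real scheduler API; all names are hypothetical, and the greedy least-loaded placement shown here is only one simple balancing strategy:

```python
# Toy illustration of the balancing idea behind a distributed resource
# scheduler: greedily place each VM on the currently least-loaded host.
# Names and numbers are made up for the example.

def place_vms(hosts, vm_demands):
    """Assign each VM (keyed by CPU demand) to the least-loaded host."""
    load = {h: 0 for h in hosts}
    placement = {}
    # Place the hungriest VMs first so big workloads spread out.
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)   # least-loaded host wins
        placement[vm] = target
        load[target] += demand
    return placement, load

placement, load = place_vms(
    ["host-a", "host-b"],
    {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 1},
)
# Both hosts end up with a load of 5 -- the demand is balanced.
```

A real scheduler also migrates running VMs when loads drift, rather than only deciding placement once.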

Managing Virtualization

To manage virtualization, you must keep track of where everything is, what everything has to accomplish, and for what purpose. You must also do the following things:

Know and understand the relationships among all elements of the network.

Be able to change things dynamically when elements within this universe change.

Keep the placement of virtual resources in step with all the other information held in the configuration management database (CMDB). Given that few organizations have anything approaching a comprehensive CMDB, that’s asking for a lot. In fact, the CMDB needs to know how all service management capabilities are integrated. (For more information on the CMDB, see Chapter 9.)
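
Keeping VM placement in step with the CMDB can be pictured as a simple record update. The dict-based "CMDB" and the function below are purely illustrative, not any real CMDB product's interface:

```python
# Minimal sketch of keeping virtual-resource placement in step with a
# CMDB: when a VM migrates, its configuration record must be updated.
# The data model here is hypothetical.

cmdb = {
    "vm-payroll": {"host": "esx-01", "owner": "finance"},
    "vm-web":     {"host": "esx-02", "owner": "marketing"},
}

def record_migration(cmdb, vm, new_host):
    """Update the configuration item when a VM moves to another host."""
    if vm not in cmdb:
        raise KeyError(f"unknown configuration item: {vm}")
    cmdb[vm]["host"] = new_host

record_migration(cmdb, "vm-payroll", "esx-03")
```

The point is that every migration event must flow into the CMDB, or the database quickly stops reflecting reality.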

Foundational issues

Managing a virtual environment involves some foundational issues that determine how well the components function as a system. These issues include how licenses are managed, how workloads are controlled, and how the network itself is managed. The reality is that IT sits between the network’s static virtualization and the dream of full automation. We discuss some foundational issues in the following sections.

184 Part IV: Nitty-Gritty Service Management

License management

Many license agreements tie license fees to physical servers rather than virtual servers. Resolve these licensing issues before using the associated software in a virtual environment; otherwise, the constraints of such licenses may become an obstacle to efficiency.

Service levels

Measuring, managing, and maintaining service levels can become more complicated simply because the environment itself is more complex.

Network management

The real target of network management becomes the virtual network, which may be harder to manage than the physical network.

Workload administration

Set policies to determine how new resources can be provisioned, and under what circumstances. Before a new resource can be introduced, it needs to be approved by management. Also, the administrator has to be sure that the right security policies are included.
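
The approval-and-policy gate described above can be sketched as a simple check. Everything here (the request shape, the tag names, the approver field) is a hypothetical illustration, not a real provisioning tool's API:

```python
# Sketch of policy-gated provisioning: a new resource is created only
# if management has approved the request AND the request satisfies the
# required security policy. Field and tag names are invented.

def can_provision(request, approved_by, required_tags):
    """Approve provisioning only with sign-off and the right security tags."""
    if approved_by is None:                 # management approval comes first
        return False
    return required_tags.issubset(request.get("tags", set()))

req = {"name": "vm-new", "tags": {"patched", "firewalled"}}
ok = can_provision(req, approved_by="ops-manager",
                   required_tags={"patched", "firewalled"})
# ok is True; the same request with approved_by=None would be rejected.
```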

Capacity planning

Although it’s convenient to think that all servers deliver roughly the same capacity, they don’t. With virtualization, you have more control of hardware purchases and can plan network resources accordingly.

IT process workflow

In virtualization, the workflow among different support groups in the data center changes; adjust procedures gradually.

Abstraction layer

Managing virtualization requires an abstraction layer that sits between the virtualized environment and the physical storage subsystems, hiding and managing the underlying resources. The virtualization software needs to be able to present the whole storage resource to the virtualized environment as a unified, sharable resource. That process can be more difficult than it sounds. All the administrative functions that you’d need in a physical data center have to be deployed in a virtualized environment, for example. Following are some of the most important considerations:

You have to factor in backup, recovery, and disaster recovery. Virtualized storage can be used to reinforce or replace existing backup and recovery capabilities. It can also create mirrored systems (duplicates of all system components) and, thus, might participate in disaster-recovery plans.

You can back up whole virtual machines or collections of virtual machines in any given state as disk files. This technique is particularly useful in a virtualized environment after you change applications or complete configurations. You must test — and, therefore, simulate — this configuration before putting it in a production environment.

You must manage the service levels of the applications running in a virtualized environment. The actual information delay from disk varies for data held locally, data held on a storage area network (SAN), and data held on network-attached storage (NAS), and the delay differences may matter. Test different storage options against service levels.

For more information on SANs, see Storage Area Networks For Dummies, 2nd Edition, by Christopher Poelker and Alex Nikitin (Wiley Publishing, Inc.).

In the long run, establish capacity planning to support the likely growth of the resource requirement for any application (or virtual machine).
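
Testing storage options against service levels amounts to comparing measured delay per tier with the target. The latency figures below are invented for illustration; real numbers come from measurement in your own environment:

```python
# Illustrative check of storage tiers against a service-level target.
# Average read delays per tier (local disk, SAN, NAS) are made-up
# example values, in milliseconds.

delays_ms = {"local": 0.5, "san": 2.0, "nas": 8.0}

def tiers_meeting_sla(delays, max_delay_ms):
    """Return the storage tiers whose measured delay meets the target."""
    return sorted(t for t, d in delays.items() if d <= max_delay_ms)

acceptable = tiers_meeting_sla(delays_ms, max_delay_ms=5.0)
# With a 5 ms target, local disk and SAN qualify; NAS does not.
```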

Provisioning software

Provisioning software enables the manual adjustment of the virtualized environment. Using provisioning software, you can create new virtual machines and modify existing ones to add or reduce resources. This type of provisioning is essential to managing workloads and to moving applications and services from one physical environment to another.

Provisioning software enables management to prioritize actions based on a company’s key performance indicators. It enables the following:

Migration of running virtual machines from one physical server to another

Automatic restart of a failed virtual machine on a separate physical server

Clustering of virtual machines across different physical servers
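
The automatic-restart capability in the list above can be sketched in a few lines. Host and VM names are hypothetical, and picking the lexicographically first healthy host is an arbitrary toy policy; real provisioning software weighs load, affinity rules, and capacity:

```python
# Toy sketch of restart-on-failure: any VM placed on a host that is no
# longer healthy is restarted on some surviving host. Names are invented.

def failover(placements, healthy_hosts):
    """Move every VM on a failed host onto a healthy host."""
    for vm, host in placements.items():
        if host not in healthy_hosts:
            placements[vm] = min(healthy_hosts)  # arbitrary healthy target
    return placements

placements = {"vm1": "srv-1", "vm2": "srv-2"}
# srv-1 has failed; only srv-2 and srv-3 remain healthy.
placements = failover(placements, healthy_hosts={"srv-2", "srv-3"})
# vm1 is restarted on srv-2; vm2 stays where it is.
```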


Managing data center resources is hard under any circumstance — and even harder when those resources are running in virtual partitions. These managed resources need to provide the right level of performance, accountability, and predictability to users, suppliers, and customers. Virtualization must be managed carefully.

Virtualizing storage

Increasingly, organizations also need to virtualize storage. This trend currently works in favor of NASes rather than SANs, because a NAS is less expensive and more flexible than a SAN.

Because the virtualized environment has at least the same requirements as the traditional data center in terms of the actual amount of data stored, managing virtualized storage becomes very important.

In addition to application data, virtual machine images need to be stored. When virtual machines aren’t in use, they’re stored as disk files that can be instantiated at a moment’s notice. Consequently, you need a way to store virtual-machine images centrally.

Hardware provisioning

Before virtualization, hardware provisioning was simply a matter of commissioning new hardware and configuring it to run new applications or possibly repurposing hardware to run some new application.

Virtualization makes this process a little simpler in one way: You don’t have to link the setup of new hardware to the instantiation of a new application. Now you can add a server to the pool and enable it to run virtual machines. Thereafter, those virtual machines are ready as they’re needed. When you add a new application, you simply configure it to run on a virtual machine.
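
The pool model described above (add hardware to enlarge the pool; give each new application a VM from the pool) can be sketched as follows. The class and its capacity bookkeeping are a hypothetical simplification:

```python
# Sketch of decoupled provisioning: commissioning a server just adds VM
# slots to a shared pool, and a new application claims a slot rather
# than its own dedicated hardware. All names are illustrative.

class ServerPool:
    def __init__(self):
        self.free_slots = 0

    def add_server(self, vm_capacity):
        self.free_slots += vm_capacity      # new hardware, more VM slots

    def allocate_vm(self):
        if self.free_slots == 0:
            raise RuntimeError("pool exhausted; commission more hardware")
        self.free_slots -= 1                # one slot now hosts a VM
        return "vm-handle"

pool = ServerPool()
pool.add_server(vm_capacity=10)
vm = pool.allocate_vm()
# Nine slots remain for future applications.
```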

Provisioning is now the act of allocating a virtual machine to a specific server from a central console. Be aware of a catch, however: You can run into trouble if you go too far. You may decide to virtualize entire sets of applications and virtualize the servers that those applications are running on, for example. Although you may get some optimization, you also create too many silos that are too hard to manage. (For more information on silos, see the nearby sidebar “Static versus dynamic virtualization.”) You may have optimized your environment so much that you have no room to accommodate peak loads.


Static versus dynamic virtualization

Virtualization is actually even more complicated than we’ve described. There are two types of virtualization: static and dynamic. Static virtualization is difficult, but the dynamic type is even more so.

In static virtualization, application silos become virtualized application silos. (A silo is an isolated piece of software and hardware that doesn’t have the ability to interact with other components; it’s a world unto itself.) You use virtualization to reduce the number of servers, but the virtualization is done via a fixed pattern that ensures that applications always have sufficient resources to manage peak workloads. This arrangement makes life relatively simple because that virtual machine will stay on the same server. Static virtualization is significantly more efficient than no virtualization, but it doesn’t make optimal use of server resources.

If you want to optimize your environment, you need to be able to allocate server resources dynamically, based on changing needs within the business. Dynamic virtualization is complex, however. It’s so complex that the market currently doesn’t offer products that can implement it effectively. But those products will be available in time, because the virtualization cat is out of the bag.

Why is dynamic virtualization inevitable? The workloads in the data center are dynamic, especially considering that Internet applications change their transaction rates wildly over time. As the key performance requirements of the environment change, the virtual environment must change to meet those needs. In the long run, envision a world in which the whole network is treated as though it were a single resource space that can be shared dynamically based on changing workloads.

The hypervisor (refer to “Using a hypervisor in virtualization,” earlier in this chapter) lets a physical server run many virtual machines at the same time. In a sense, one server does the work of maybe ten. That arrangement is a neat one, but you may not be able to shift those kinds of workloads without consequences. A server running 20 virtual machines, for example, may still have the same network connection with the same traffic limitation, which could act as a bottleneck. Alternatively, if all those applications use local disks, many of them may need to use a SAN or NAS — and that requirement may have performance implications.
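
The network bottleneck above is easy to see with back-of-the-envelope arithmetic. The link speed and VM count below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope check of the consolidation bottleneck: 20 VMs on
# one host still share that host's single network link. Figures are
# illustrative assumptions.

link_gbps = 1.0                  # one physical NIC on the host
vms = 20
per_vm_gbps = link_gbps / vms    # fair-share bandwidth per VM

# Each VM's fair share is 0.05 Gbps (about 50 Mbps) -- potentially far
# less than it enjoyed on a dedicated physical server with its own link.
```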

Security issues

Using virtual machines complicates IT security in a big way. Virtualization changes the definition of what a server is, so security is no longer trying to protect a physical server or collection of servers that an application runs on. Instead, it’s protecting virtual machines or collections of virtual machines. Because most data centers support only static virtualization, it isn’t yet well
