
Chapter 9 Managing Storage

- Performs array-specific actions necessary for storage failover. For example, for active-passive devices, it can activate passive paths.
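
To see which SATPs are loaded on the host, and the default PSP associated with each, you can run the following command (a sketch using the ESX 4.x esxcli syntax):

# List the installed SATPs and the default path selection policy each one assigns
esxcli nmp satp list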

VMware PSPs

Path Selection Plug-Ins (PSPs) run with the VMware NMP and are responsible for choosing a physical path for I/O requests.

The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical paths for that device. You can override the default PSP for any given device; a command-line sketch follows the list below.

By default, the VMware NMP supports the following PSPs:

Most Recently Used (VMW_PSP_MRU)

Selects the path the ESX host used most recently to access the given device. If this path becomes unavailable, the host switches to an alternative path and continues to use the new path while it is available. MRU is the default path policy for active-passive arrays.

Fixed (VMW_PSP_FIXED)

Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the host cannot use the preferred path, it selects a random alternative available path. The host reverts back to the preferred path as soon as that path becomes available. Fixed is the default path policy for active-active arrays.

CAUTION If used with active-passive arrays, the Fixed path policy might cause path thrashing.

VMW_PSP_FIXED_AP

Extends the Fixed functionality to active-passive and ALUA mode arrays.

Round Robin (VMW_PSP_RR)

Uses a path selection algorithm that rotates through all available active paths, enabling load balancing across the paths.
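
The following sketch shows how to inspect and override these policies from the service console. The syntax is the ESX 4.x esxcli form, and the device identifier is a placeholder; substitute the identifier of an actual device on your host.

# Show each NMP-claimed device with its current SATP and PSP
esxcli nmp device list

# List the PSPs installed on the host
esxcli nmp psp list

# Override the PSP for one device, for example to switch it to Round Robin
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR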

VMware NMP Flow of I/O

When a virtual machine issues an I/O request to a storage device managed by the NMP, the following process takes place.

1 The NMP calls the PSP assigned to this storage device.

2 The PSP selects an appropriate physical path on which to issue the I/O.

3 The NMP issues the I/O request on the path selected by the PSP.

4 If the I/O operation is successful, the NMP reports its completion.

5 If the I/O operation reports an error, the NMP calls the appropriate SATP.

6 The SATP interprets the I/O command errors and, when appropriate, activates the inactive paths.

7 The PSP is called to select a new path on which to issue the I/O.
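
You can observe the outcome of this process by listing the paths the NMP manages, together with their current states (for example, active, standby, or dead). This is a sketch in the ESX 4.x esxcli syntax:

# List every path claimed by the NMP, with its runtime state
esxcli nmp path list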

Multipathing with Local Storage and Fibre Channel SANs

In a simple multipathing local storage topology, you can use one ESX host, which has two HBAs. The ESX host connects to a dual-port local storage system through two cables. This configuration ensures fault tolerance if one of the connection elements between the ESX host and the local storage system fails.

To support path switching with FC SAN, the ESX host typically has two or more HBAs available from which the storage array can be reached using one or more switches. Alternatively, the setup can include one HBA and two storage processors so that the HBA can use a different path to reach the disk array.
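
As a sketch of how to verify such a setup from the service console (ESX 4.x command forms), you can list the storage adapters the host detects and the paths behind them:

# List the storage adapters (HBAs) registered on the host
esxcfg-scsidevs -a

# Show each path from adapter to LUN in brief form
esxcfg-mpath -b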


In Figure 9-2, multiple paths connect each server with the storage device. For example, if HBA1 or the link between HBA1 and the switch fails, HBA2 takes over and provides the connection between the server and the switch. The process of one HBA taking over for another is called HBA failover.

Figure 9-2. Fibre Channel Multipathing

[Figure 9-2 shows two hosts, each with two HBAs (HBA1 and HBA2 on the first host, HBA3 and HBA4 on the second), connected through two switches to a storage array with two storage processors, SP1 and SP2.]

Similarly, if SP1 or the link between SP1 and the switch breaks, SP2 takes over and provides the connection between the switch and the storage device. This process is called SP failover. ESX supports HBA and SP failover with its multipathing capability.

Multipathing with iSCSI SAN

With iSCSI storage, you can take advantage of the multipathing support that the IP network offers. In addition, ESX supports host-based multipathing for all types of iSCSI initiators.

ESX can use multipathing support built into the IP network, which allows the network to perform routing. Through dynamic discovery, iSCSI initiators obtain a list of target addresses that the initiators can use as multiple paths to iSCSI LUNs for failover purposes.

ESX also supports host-based multipathing, in which the host's own storage stack manages the alternative paths to a device.

Figure 9-3 shows multipathing setups possible with different types of iSCSI initiators.


Figure 9-3. Host-Based Multipathing

[Figure 9-3 shows host 1 with two hardware iSCSI adapters, HBA1 and HBA2, and host 2 with a software iSCSI adapter that uses two NICs, NIC1 and NIC2. Both hosts connect through the IP network to an SP on the iSCSI storage system.]

Multipathing with Hardware iSCSI

With hardware iSCSI, the host typically has two or more hardware iSCSI adapters available, from which the storage system can be reached through one or more switches. Alternatively, the setup might include one adapter and two storage processors, so that the adapter can use a different path to reach the storage system.

In Figure 9-3, host 1 has two hardware iSCSI adapters, HBA1 and HBA2, that provide two physical paths to the storage system. Multipathing plug-ins on your host, whether the VMkernel NMP or any third-party MPPs, have access to the paths by default and can monitor the health of each physical path. If, for example, HBA1 or the link between HBA1 and the network fails, the multipathing plug-ins can switch the path over to HBA2.

Multipathing with Software iSCSI

With software iSCSI, as shown for host 2 in Figure 9-3, you can use multiple NICs that provide failover and load-balancing capabilities for iSCSI connections between your host and storage systems.

For this setup, because multipathing plug-ins do not have direct access to physical NICs on your host, you must connect each physical NIC to a separate VMkernel port. You then associate all VMkernel ports with the software iSCSI initiator using a port binding technique. As a result, each VMkernel port connected to a separate NIC becomes a different path that the iSCSI storage stack and its storage-aware multipathing plug-ins can use.
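
As a sketch of the port binding step (ESX 4.x esxcli syntax; the vmk and vmhba names are placeholders for the VMkernel ports and software iSCSI adapter on your host):

# Bind each VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify which VMkernel NICs are bound to the adapter
esxcli swiscsi nic list -d vmhba33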

For information about how to configure multipathing for the software iSCSI, see “Networking Configuration for Software iSCSI and Dependent Hardware iSCSI,” on page 68.

Path Scanning and Claiming

When you start your ESX host or rescan your storage adapter, the host discovers all physical paths to storage devices available to the host. Based on a set of claim rules defined in the /etc/vmware/esx.conf file, the host determines which multipathing plug-in (MPP) should claim the paths to a particular device and become responsible for managing the multipathing support for the device.
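
To see the claim rules currently in effect, and the plug-in each rule assigns, you can list them from the service console (ESX 4.x syntax):

# List the claim rules that determine which MPP claims each path
esxcli corestorage claimrule list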

By default, the host performs a periodic path evaluation every 5 minutes, causing any unclaimed paths to be claimed by the appropriate MPP.
