Setting the link aggregation

As a rough comparison, we can think of link aggregation (802.3ad LACP) as a layer 2 (data link) network technology that acts as the complement of IPMP (a layer 3, IP, network technology). While IPMP is concerned with offering network interface fault tolerance (eliminating a single point of failure) and offers higher outbound throughput as a bonus, link aggregation works like the old "trunk" feature from previous versions of Oracle Solaris: it offers high throughput for network traffic and, as a bonus, also provides fault tolerance, so that if a network interface fails, the traffic isn't interrupted.

Summarizing the facts:

- IPMP is recommended for fault tolerance, but it also offers some outbound load balancing

- Link aggregation is recommended for increasing throughput, but it also offers fault tolerance

The link aggregation feature puts two or more network interfaces together and administers all of them as a single unit. Link aggregation mainly brings performance advantages, but all links must run at the same speed, in full-duplex and point-to-point mode. An example of an aggregation is Aggregation_1 | net0, net1, net2, and net3.
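As a minimal sketch of this example (assuming the hypothetical aggregation name aggr1, standing in for Aggregation_1, over the physical datalinks net0 through net3, and an example address), such an aggregation could be created with the dladm and ipadm commands:

    # create a trunk link aggregation over four physical datalinks
    dladm create-aggr -l net0 -l net1 -l net2 -l net3 aggr1

    # plumb an IP interface and an example static address on top of it
    ipadm create-ip aggr1
    ipadm create-addr -T static -a 192.168.1.50/24 aggr1/v4

    # verify the aggregation and its underlying ports
    dladm show-aggr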

In the end, there's only one logical object (Aggregation_1) created on top of the four underlying network interfaces (net0, net1, net2, and net3). These are shown as a single interface that sums their strengths (high throughput, for example) while keeping the individual interfaces hidden. Nonetheless, a question remains: how are the outgoing packets delivered and balanced over the interfaces?

The answer to this question lies in the aggregation load-balancing policies, which determine the outgoing link by hashing certain values (properties) of each packet. They are enumerated as follows:

- L2 (Networking): In this policy, the outgoing interface is chosen by hashing the MAC header of each packet.

- L3 (Addressing): In this policy, the outgoing interface is chosen by hashing the IP header of each packet.

- L4 (Communication): In this policy, the outgoing interface is chosen by hashing the UDP and TCP headers of each packet. This is the default policy. An important note: it usually gives the best performance, but it isn't supported across all systems and isn't fully 802.3ad-compliant in situations where the switch can be a restrictive factor.

Additionally, if the aggregation is connected to a switch, the Link Aggregation Control Protocol (LACP) must be supported by both the physical switch and the aggregation. On the aggregation side, LACP can be configured with the following values (a configuration sketch follows this list):

- off: This is the default mode for the aggregation

- active: In this mode, the aggregation generates LACP Data Units (LACPDUs) at regular intervals

- passive: In this mode, the aggregation generates LACPDUs only when it receives one from the switch; note that if both sides (the aggregation and the switch) are set to passive mode, no LACPDUs are ever exchanged and LACP never comes up
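As a hedged sketch (reusing the hypothetical aggr1 from the previous example), the load-balancing policy and the LACP mode can be adjusted after creation with dladm modify-aggr:

    # hash on both the MAC (L2) and IP (L3) headers when picking the outgoing port
    dladm modify-aggr -P L2,L3 aggr1

    # negotiate LACP actively with the switch
    dladm modify-aggr -L active aggr1

    # confirm the LACP mode and the state of each port
    dladm show-aggr -L

Both options can also be passed at creation time to dladm create-aggr.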

The only disadvantage of normal link aggregation (known as trunk link aggregation) is that it can't span multiple switches and is limited to working with only one switch. To overcome this, there's another aggregation technique that can span multiple switches, named Data Link Multipathing (DLMP) aggregation. To understand DLMP aggregation, imagine a scenario where we have the following in the same system (a command sketch follows the lists below):

- Zone 1 with the vnicA, vnicB, and vnicC virtual interfaces, which are connected to NIC1

- Zone 2 with the vnicD and vnicE virtual interfaces, both of which are connected to NIC2

- NIC1 is connected to Switch1 (SW1)

- NIC2 is connected to Switch2 (SW2)

The following is another way of representing this:

- Zone1 | vnicA, vnicB, vnicC | NIC1 | SW1

- Zone2 | vnicD, vnicE | NIC2 | SW2
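As a hedged sketch (assuming net0 and net1 stand in for NIC1 and NIC2, and lowercase VNIC names ending in a digit, since Solaris datalink names are expected to end with a number), this scenario could be built with dladm create-vnic:

    # Zone 1's virtual interfaces, all on top of NIC1 (net0 here)
    dladm create-vnic -l net0 vnica0
    dladm create-vnic -l net0 vnicb0
    dladm create-vnic -l net0 vnicc0

    # Zone 2's virtual interfaces, both on top of NIC2 (net1 here)
    dladm create-vnic -l net1 vnicd0
    dladm create-vnic -l net1 vnice0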

Using trunk link aggregation, if the NIC1 network interface went down, the system could still fail all traffic over to NIC2, and there wouldn't be any problem as long as both NIC1 and NIC2 were connected to the same switch (which isn't the case here).

However, in this case, everything is worse because the two switches are connected to the same system. What would happen if Switch1 went down? It would be a big problem, because Zone1 would become isolated. Trunk link aggregation doesn't support spanning switches, so there would be no way to fail over to the other switch (Switch2). In short, Zone1 would lose network access.

This is a perfect situation for DLMP aggregation, because it is able to span multiple switches without requiring any special configuration on the switches (the only requirement is that both switches be in the same broadcast domain). Even if the Switch1 (SW1) port goes down, Oracle Solaris 11 is able to fail over all the vnicA, vnicB, and vnicC traffic from Zone1 to NIC2, which uses a port on a different switch (SW2). In short, Zone1 doesn't lose access to the network.
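A minimal sketch of this fix (keeping the hypothetical names used so far and a fresh aggregation name, aggr2) would be to group both physical datalinks into a DLMP aggregation and recreate the VNICs on top of the aggregation instead of on a single NIC:

    # create a DLMP aggregation spanning both physical datalinks
    dladm create-aggr -m dlmp -l net0 -l net1 aggr2

    # a VNIC on the aggregation can fail over between the two switches
    dladm create-vnic -l aggr2 vnica0

    # check the aggregation mode and the state of each port
    dladm show-aggr -x

With this layout, when the SW1 port fails, the aggregation transparently moves the affected VNICs to the port connected to SW2.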
