Setting the link aggregation
As a rough comparison, we can think of link aggregation (802.3ad, LACP) as a layer 2 (data link) technology that acts as the inverse of IPMP, which works at layer 3 (IP). While IPMP is primarily concerned with network interface fault tolerance, eliminating a single point of failure and offering higher outbound throughput as a bonus, link aggregation works like the old "trunk" feature from previous versions of Oracle Solaris: it offers high throughput for network traffic and, as a bonus, also provides fault tolerance, so that if a network interface fails, traffic isn't interrupted.
To summarize:
- IPMP is recommended for fault tolerance, but it also offers some outbound load balancing
- Link aggregation is recommended for increasing throughput, but it also offers fault tolerance
The link aggregation feature binds two or more network interfaces together and administers all of them as a single unit. Link aggregation mainly offers performance advantages, but all links must run at the same speed and operate in full-duplex, point-to-point mode. An example of an aggregation is Aggregation_1 | net0, net1, net2, and net3.
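As a hedged illustration of this example, such an aggregation could be created with dladm and an IP interface plumbed on top of it roughly as follows (the aggregation name aggr1 and the IP address are assumptions for the sketch, not values from the recipe):

    # Create the aggregation over the four underlying interfaces
    dladm create-aggr -l net0 -l net1 -l net2 -l net3 aggr1
    # Verify the aggregation and its member ports
    dladm show-aggr
    # Plumb an IP interface and assign an address (address is illustrative)
    ipadm create-ip aggr1
    ipadm create-addr -T static -a 192.168.1.66/24 aggr1/v4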
In the end, there's only one logical object (Aggregation_1), created on top of the four underlying network interfaces (net0, net1, net2, and net3). They are presented as a single interface that combines their strengths (high throughput, for example) while keeping the individual links hidden. Nonetheless, a question remains: how are outgoing packets delivered and balanced over the interfaces?
The answer lies in the aggregation load-balancing policies, which determine the outgoing link by hashing certain header values (properties) and are enumerated as follows:
- L2 (Networking): The outgoing interface is chosen by hashing the MAC header of each packet
- L3 (Addressing): The outgoing interface is chosen by hashing the IP header of each packet
- L4 (Communication): The outgoing interface is chosen by hashing the UDP or TCP header of each packet. This is the default policy and usually gives the best performance, but it isn't supported across all systems and isn't fully 802.3ad-compliant in situations where the switch is a restrictive factor. Additionally, if the aggregation is connected to a switch, the Link Aggregation Control Protocol (LACP) must be supported by both the physical switch and the aggregation, and the aggregation's LACP mode can be configured with the following values (see the sketch after this list):
    - off: This is the default LACP mode for the aggregation; no LACP Data Units are exchanged
    - active: In this mode, the aggregation generates LACP Data Units at regular intervals
    - passive: In this mode, the aggregation only generates LACP Data Units when it receives one from the switch; if both sides (the aggregation and the switch) are set to passive mode, LACP negotiation never starts
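For instance, a hedged sketch of creating a trunk aggregation with an explicit policy and LACP mode might look like the following (the interface and aggregation names are illustrative, and the switch ports are assumed to be LACP-capable):

    # Create a trunk aggregation that hashes on L4 (TCP/UDP) headers and
    # actively sends LACP Data Units to the switch
    dladm create-aggr -P L4 -L active -l net0 -l net1 aggr1
    # The policy and LACP mode can also be changed on an existing aggregation
    dladm modify-aggr -P L2,L3 -L passive aggr1
    # Display the LACP state negotiated on each port
    dladm show-aggr -L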
The only disadvantage of normal link aggregation (known as trunk link aggregation) is that it can't span multiple switches and is limited to working with a single switch. To overcome this, there's another aggregation technique that can span multiple switches, named Data Link Multipathing (DLMP) aggregation. To understand DLMP aggregation, imagine a scenario where we have the following in the same system:
- Zone1 with the vnicA, vnicB, and vnicC virtual interfaces, which are connected to NIC1
- Zone2 with the vnicD and vnicE virtual interfaces, both of which are connected to NIC2
- NIC1 is connected to Switch1 (SW1)
- NIC2 is connected to Switch2 (SW2)
The following is another way of representing this:
- Zone1 | vnicA, vnicB, vnicC | NIC1 | SW1
- Zone2 | vnicD, vnicE | NIC2 | SW2
Using trunk link aggregation, if the NIC1 network interface went down, the system could still fail all traffic over to NIC2, and there wouldn't be any problem if both NIC1 and NIC2 were connected to the same switch (which isn't the case here).
However, in this case, things are worse because there are two switches connected to the same system. What would happen if Switch1 went down? It would be a big problem, because Zone1 would be isolated: trunk link aggregation doesn't support spanning across switches, so there would be no way to fail over to the other switch (Switch2). In short, Zone1 would lose network access.
This is the perfect situation in which to use DLMP aggregation, because it can span multiple switches without requiring any special configuration on the switches (the only requirement is that both switches are in the same broadcast domain). Even if the Switch1 (SW1) port goes down, Oracle Solaris 11 is able to fail all the vnicA, vnicB, and vnicC traffic from Zone1 over to NIC2, which is connected to a different switch (SW2). In brief, Zone1 doesn't lose access to the network.
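As a rough, hedged sketch, a DLMP aggregation for the preceding scenario could be created as follows (DLMP mode is available from Oracle Solaris 11.1 onwards; here net0 and net1 stand in for NIC1 and NIC2, and the aggregation name is illustrative):

    # Create an aggregation in DLMP mode so that its ports can span both switches
    dladm create-aggr -m dlmp -l net0 -l net1 aggr1
    # An existing trunk aggregation can also be converted to DLMP mode
    dladm modify-aggr -m dlmp aggr1
    # Check the aggregation mode and the state of each port
    dladm show-aggr -x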
