DQOS Exam Certification Guide - Cisco Press

254 Chapter 4: Congestion Management

As with other queuing tools, none of this work happens unless the interface is congested. When the interface is not congested (in other words, the TX Ring is not full), new packets are placed directly into the TX Ring. When the TX Ring fills, PQ performs queuing. When all the PQ queues and the TX Ring have drained, congestion has abated; newly arriving packets are again placed directly into the TX Ring until it fills, which in turn restarts the queuing process with PQ.
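The strict-priority dequeue decision PQ makes can be sketched in a few lines of Python. This is an illustrative model only, not IOS internals; the queue names follow PQ's High/Medium/Normal/Low convention, but the data structures and packet labels are invented for the example:

```python
from collections import deque

# PQ's four queues, highest priority first (High, Medium, Normal, Low).
# Names follow the PQ convention; the structures are illustrative.
queues = {name: deque() for name in ("high", "medium", "normal", "low")}

def pq_dequeue():
    """Return the next packet to send: always drain the highest-priority
    non-empty queue first, starving lower queues while it has traffic."""
    for name in ("high", "medium", "normal", "low"):
        if queues[name]:
            return queues[name].popleft()
    return None  # all queues empty: congestion has abated

# Example: a waiting High packet is always sent before any Low packets.
queues["low"].extend(["ftp-1", "ftp-2"])
queues["high"].append("voip-1")
print(pq_dequeue())  # voip-1
print(pq_dequeue())  # ftp-1
```

The loop order is the whole scheduler: as long as the High queue has packets, lower queues are never consulted, which is exactly why the FTP and HTTP sessions mentioned above starved and timed out.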

PQ works great for QoS policies that need to treat one type of traffic with the absolute best service possible. It has been around since IOS 10.0. However, PQ’s service for the lower queues degrades quickly, making PQ impractical for most applications today. For instance, even running one FTP connection, one web browser, one NetMeeting call, and two VoIP calls when creating the output for this section of the book, the TCP connections for the FTP and HTTP traffic frequently timed out.

Table 4-3 summarizes some of the key functions and features of PQ. For those of you pursuing the QoS exam, look at Appendix B for details of how to configure PQ.

Table 4-3  PQ Functions and Features

PQ Feature: Explanation

Classification: Classifies based on matching an ACL for all Layer 3 protocols, incoming interface, packet size, whether the packet is a fragment, and TCP and UDP port numbers.

Drop policy: Tail drop.

Maximum number of queues: 4.

Maximum queue length: Infinite; really means that packets will not be tail dropped, but will be queued.

Scheduling inside a single queue: FIFO.

Scheduling among all queues: Always service higher-priority queues first; result is great service for the High queue, with potential for 100% of link bandwidth. Service degrades quickly for lower-priority queues.

Custom Queuing

Custom Queuing (CQ) followed PQ as the next queuing tool added to IOS. CQ addresses the biggest drawback of PQ by providing a queuing tool that does service all queues, even during times of congestion. It has 16 queues available, implying 16 classification categories, which is plenty for most applications. The negative part of CQ, as compared to PQ, is that CQ's scheduler does not have an option to always service one queue first, like PQ's High queue, so CQ does not provide great service for delay- and jitter-sensitive traffic.


As with most queuing tools, the most interesting part of the tool is the scheduler. The CQ scheduler gives an approximate percentage of overall link bandwidth to each queue. CQ approximates the bandwidth percentages, as opposed to meeting an exact percentage, due to the simple operation of the CQ scheduler. Figure 4-11 depicts the CQ scheduler logic.

Figure 4-11 CQ Scheduling Logic for Current Queue

[Flowchart: Any packets in the current queue? If no, repeat this process with the next queue. If yes, does the counter equal or exceed the byte count for this queue? If yes, repeat this process with the next queue. If no, move the packet to the TX Ring (waiting for space in the TX Ring as needed), add the packet length to the counter, and check the current queue again.]

The CQ scheduler performs round-robin service on each queue, beginning with Queue 1. CQ takes packets from the queue, until the total byte count specified for the queue has been met or exceeded. After the queue has been serviced for that many bytes, or the queue does not have any more packets, CQ moves on to the next queue, and repeats the process.
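The byte-count service of a single queue can be modeled with a short Python sketch. The byte counts, queue contents, and packet sizes here are invented for illustration; only the stop condition (serve until the byte count is met or exceeded, or the queue empties) reflects CQ's documented behavior:

```python
from collections import deque

# Illustrative CQ state: a configured byte count per queue, and a FIFO of
# (name, length-in-bytes) packets per queue. Values are examples only.
byte_counts = [5000, 10000]
queues = [deque([("p1", 1500), ("p2", 1500), ("p3", 1500),
                 ("p4", 1500), ("p5", 1500)]),
          deque([("q1", 1500)])]

def service_queue(i, tx):
    """Serve queue i until its byte count is met or exceeded, or it empties."""
    sent = 0
    while queues[i] and sent < byte_counts[i]:
        name, length = queues[i].popleft()
        tx.append(name)   # move packet toward the TX Ring
        sent += length    # add packet length to the counter
    # the counter resets when CQ moves on to the next queue

tx_ring = []
for i in range(len(queues)):  # one round-robin pass through the queues
    service_queue(i, tx_ring)
print(tx_ring)  # ['p1', 'p2', 'p3', 'p4', 'q1']
```

Note that p4 is still sent even though it pushes the counter from 4500 to 6000 bytes, past the 5000-byte limit; this is the "met or exceeded" behavior, and it is why CQ approximates, rather than exactly meets, the configured bandwidth shares.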

CQ does not configure the exact link bandwidth percentage, but rather it configures the number of bytes taken from each queue during each round-robin pass through the queues. Suppose, for example, that an engineer configures CQ to use five queues. The engineer assigns a byte count of 10000 bytes for each queue. With this configuration, the engineer has reserved 20 percent of the link bandwidth for each queue. (If each queue sends 10000 bytes, a total of 50000 bytes are sent per cycle, so each queue sends 10000/50000 of the bytes out of the interface, or 20 percent.) If instead the engineer has assigned byte counts of 5000 bytes for the first 2 queues, 10000 for the next 2 queues, and 20000 for the fifth queue, the total bytes sent in each pass through the queues would again total 50000 bytes. Therefore, Queues 1 and 2 would get 5000/50000, or 10 percent of the link bandwidth. Queues 3 and 4 would get 10000/50000, or 20 percent of the bandwidth, and Queue 5 would get 20000/50000, or 40 percent. The following formula calculates the implied bandwidth percentage for Queue x:

Bandwidth percentage for Queue x = (Byte count for Queue x) / (Sum of byte counts for all queues)
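The formula is simple enough to check in a couple of lines of Python, reproducing the chapter's 10/10/20/20/40 percent example:

```python
def implied_bandwidth(byte_counts):
    """Implied bandwidth share per queue, as a percentage:
    byte count for the queue divided by the sum of all byte counts."""
    total = sum(byte_counts)
    return [100 * b / total for b in byte_counts]

# The chapter's example: byte counts of 5000, 5000, 10000, 10000, 20000
# for Queues 1 through 5.
print(implied_bandwidth([5000, 5000, 10000, 10000, 20000]))
# [10.0, 10.0, 20.0, 20.0, 40.0]
```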

The CQ scheduler essentially guarantees the minimum bandwidth for each queue, while allowing queues to have more bandwidth under the right conditions. Imagine that 5 queues have been configured with the byte counts of 5000, 5000, 10000, 10000, and 20000 for Queues 1 through 5, respectively. If all 5 queues have plenty of packets to send, the percentage bandwidth given to each queue is 10 percent, 10 percent, 20 percent, 20 percent, and 40 percent, as described earlier. However, suppose that Queue 4 has no traffic over some short period of time. For that period, when the CQ scheduler tries to service Queue 4, it notices that no packets are waiting, and moves immediately to the next queue. Over this short period of time, only Queues 1 through 3 and Queue 5 have packets waiting. In this case, the queues would receive 12.5 percent, 12.5 percent, 25 percent, 0 percent, and 50 percent of link bandwidth, respectively. (The math to get these percentages is number-of-bytes-per-cycle/40000, because around 40000 bytes should be taken from the four active queues per cycle.) Note also that queues that have not been configured are automatically skipped.
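The redistribution to the active queues can be computed the same way, by summing byte counts over only the queues that currently have traffic. The activity flags below are invented for the example; they model which queues happen to have packets waiting during the short period described above:

```python
def active_bandwidth(byte_counts, active):
    """Bandwidth share per queue (percent) when only some queues have
    traffic: idle queues are skipped, so the active queues split the
    cycle among themselves in proportion to their byte counts."""
    total = sum(b for b, a in zip(byte_counts, active) if a)
    return [100 * b / total if a else 0.0
            for b, a in zip(byte_counts, active)]

# Queue 4 idle: the active queues account for 40000 bytes per cycle.
print(active_bandwidth([5000, 5000, 10000, 10000, 20000],
                       [True, True, True, False, True]))
# [12.5, 12.5, 25.0, 0.0, 50.0]
```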

Unlike PQ, CQ does not name the queues, but it numbers the queues 1 through 16. No single queue has a better treatment by the scheduler than another, other than the number of bytes serviced for each queue. So, in the example in the last several paragraphs, Queue 5, with 20000 bytes serviced on each turn, might be considered to be the “best” queue with this configuration. Do not be fooled by that assumption! If the traffic classified into Queue 5 comprises 80 percent of the offered traffic, the traffic in Queue 5 may get the worst treatment among all 5 queues. And of course, the traffic patterns will change over short periods of time, and over long periods. Therefore, whereas understanding the scheduler logic is pretty easy, choosing the actual numbers requires some traffic analysis, and good guessing to some degree.
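The warning about Queue 5 can be made concrete with a quick comparison of each queue's allocated share against its offered load. The offered-load percentages here are hypothetical, chosen to match the 80 percent figure in the text:

```python
allocated = [10, 10, 20, 20, 40]  # configured percent of link bandwidth
offered   = [5, 5, 5, 5, 80]      # hypothetical percent of offered load

# Ratio of allocated share to offered load: a value below 1.0 means the
# queue is oversubscribed and gets the worst relative treatment.
ratios = [a / o for a, o in zip(allocated, offered)]
print(ratios)  # [2.0, 2.0, 4.0, 4.0, 0.5]
```

Even though Queue 5 has the largest byte count, its 0.5 ratio shows it is the only queue receiving less than it needs, which is why the "best" queue by configuration can give the worst service in practice.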

The rest of the details about CQ mirror the features in PQ. Figure 4-12 highlights the features of CQ.

Figure 4-12 CQ Features

1) Classification: Extended ACL, multiprotocol, source interface, packet length, fragments, TCP and UDP ports
2) Drop decision: Tail drop
3) Maximum number of queues: 16 queues max
4) Maximum queue length: Unlimited
5) Scheduling inside queue: FIFO
6) Scheduler logic: Round robin through the queues, serving packets until each queue's byte count has been met or exceeded, then on to the TX Queue/Ring

Defaults per queue — Drop policy: Tail drop (only option); Queue size: 20

The figure represents the internals of a router, after the routing decision has identified the output interface for the packet. The following list describes each component of the queuing process, with the numbers in the list matching the numbers in the figure:

1. CQ can classify packets using access-control lists (ACLs) for most Layer 3 protocols, matching anything allowed by any of the types of ACLs. CQ can also directly match, without using an ACL, the incoming interface, packet length, and TCP and UDP port numbers. CQ and PQ use the exact same classification options.