

CLASS OF SERVICE AND QUALITY OF SERVICE

Most vendors provide mechanisms for the network operator to configure the steps or levels at which dropping occurs and the proportion of packets dropped at each level. RED, by default, treats all flows as equal. Thus, packets in your high-priority classes are just as likely to be dropped as those in your low-priority classes. Clearly, this is not optimal if a flow is particularly intolerant of packet loss. Therefore, it is necessary to apply different drop profiles to each class of traffic. Generally, two profiles are sufficient (a normal and an aggressive drop profile), but there is nothing to prevent each class from having a unique drop profile. The main constraint is complexity.

RED profiles are based on a set of mappings of percentage throughput to a drop probability. These create stepped profiles as in Figure 9.3.

Figure 9.3 Stepped weighted RED parameters (drop probability, from 0 to 1, plotted against percentage throughput, from 0 to 100, for a 'normal' and an 'aggressive' drop profile)
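As an illustration only, such a stepped profile can be modelled as a list of (percentage throughput, drop probability) pairs. The following Python sketch uses invented breakpoints (not any vendor's defaults) to show a normal and an aggressive profile and how a drop decision could be looked up from the current level:

import random

# Hypothetical stepped drop profiles: each entry is
# (minimum percentage throughput / queue fill, drop probability).
# Breakpoints are made up for illustration, not taken from any vendor.
NORMAL_PROFILE = [(0, 0.0), (60, 0.02), (80, 0.1), (95, 0.3), (100, 1.0)]
AGGRESSIVE_PROFILE = [(0, 0.0), (30, 0.05), (50, 0.2), (70, 0.5), (90, 1.0)]

def drop_probability(fill_percent, profile):
    # Return the probability of the highest step at or below the
    # current fill level, giving the stepped shape of Figure 9.3.
    prob = 0.0
    for threshold, p in profile:
        if fill_percent >= threshold:
            prob = p
        else:
            break
    return prob

def should_drop(fill_percent, profile):
    # Drop the packet at random with the probability for this step.
    return random.random() < drop_probability(fill_percent, profile)

# A loss-sensitive class might be given the normal profile and a
# low-priority class the aggressive one:
print(drop_probability(85, NORMAL_PROFILE))      # 0.1
print(drop_probability(85, AGGRESSIVE_PROFILE))  # 0.5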

9.2.4 QUEUEING STRATEGIES

Queueing is the basis of the implementation of required egress behaviours. Different vendors provide widely differing numbers of queues on their devices, some using software-based queues (less common today) and others using hardware-based queues.

The primary basis of providing preferential treatment to particular packets is to service different queues at different rates. For example, going back to the earlier example of four services (gold, silver, bronze, and best effort), it might be that the gold queue is served four times, silver three times, bronze twice, and best effort once, a total of 10 units of service (see Figure 9.4). This is a somewhat simple example, but it demonstrates the point.
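A minimal Python sketch of this weighting (queue names and weights taken from the example above, everything else illustrative): each queue may transmit up to its weight in packets per ten-unit cycle, and empty queues simply forfeit their turns.

from collections import deque

# Service weights from the example: 10 units of service per cycle.
WEIGHTS = {"gold": 4, "silver": 3, "bronze": 2, "best-effort": 1}

def weighted_cycle(queues, weights=WEIGHTS):
    # One scheduling cycle: each queue may transmit up to `weight`
    # packets; empty queues simply give up their turns.
    transmitted = []
    for name, weight in weights.items():
        q = queues[name]
        for _ in range(weight):
            if not q:
                break
            transmitted.append((name, q.popleft()))
    return transmitted

queues = {
    "gold": deque(["g1", "g2", "g3", "g4", "g5"]),
    "silver": deque(["s1", "s2", "s3"]),
    "bronze": deque(["b1", "b2"]),
    "best-effort": deque(["e1"]),
}
print(weighted_cycle(queues))
# gold gets 4 opportunities, silver 3, bronze 2, best effort 1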

Figure 9.4 Weighted queueing schedulers (Gold, Silver, Bronze, and Best Effort queues served by a weighted scheduler)

9.2.4.1 Fair Queueing

Fair queueing is a mechanism used to provide a somewhat fairer division of transmission resources. Fair queueing ensures that bandwidth is shared equally among the queues on an approximately per-bit basis, so that no queue becomes starved of bandwidth. Each queue is serviced in turn, with empty queues receiving no service; the available service is shared equally among the queues that have packets to transmit. Fair queueing also ensures that low-bandwidth flows and high-bandwidth flows receive a consistent response and that no flow can consume more than its fair share of output capacity.
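The behaviour just described can be sketched as a scheduler that makes repeated rounds over the queues, skipping empty ones and taking one packet from each queue that has traffic waiting (Python, illustrative names only; one queue per flow is assumed):

from collections import deque

def fair_queue_round(queues):
    # One service round of simple per-packet fair queueing.
    # `queues` maps flow_id -> deque of packet sizes (bytes).
    # Empty queues receive no service; every queue with traffic waiting
    # gets exactly one packet transmitted per round, so flows sending
    # larger packets receive more bytes per round.
    transmitted = []
    for flow_id, q in queues.items():
        if q:                          # skip empty queues
            size = q.popleft()         # serve one packet from this flow
            transmitted.append((flow_id, size))
    return transmitted

# Example: three active flows, one idle flow.
queues = {
    "flow-a": deque([1500, 1500]),
    "flow-b": deque([64]),
    "flow-c": deque([512, 512, 512]),
    "flow-d": deque([]),               # idle: gets no service
}
print(fair_queue_round(queues))
# [('flow-a', 1500), ('flow-b', 64), ('flow-c', 512)]

As the example output shows, the flow sending 1500-byte packets receives far more bytes per round than the flow sending 64-byte packets, which is exactly the first limitation discussed below.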

FQ does have some limitations, some of which are implementation specific and some are associated with the algorithm itself:

FQ assumes that all packets are the same size. It is much less effective when packets are of significantly differing sizes. Flows of larger packets are favoured over flows of smaller packets.

FQ is implicitly fair, i.e. all flows are treated equally. There is no mechanism to favour a high priority flow over a low priority flow.

FQ is sensitive to the order in which packets arrive. A packet arriving in an empty queue immediately after the queue has just been skipped will not be transmitted until that queue is considered again for service. Depending upon the number of queues, this can introduce a significant delay.

FQ has often been implemented as a software feature. On such devices, this inherently limits the speed at which it can operate, so it is constrained to lower-speed interfaces.

9.2.4.2 Weighted Fair Queueing (WFQ)

Despite its name, WFQ is not a direct derivative of FQ but was developed independently. Looking back at Figure 9.4, the gold queue is given 4 credits. It may transmit packets until its credits are all gone or until it does not have enough credits to transmit a whole packet. So, it can transmit the 1-unit packet but, even though it has 3 remaining credits, it cannot transmit the 4-unit packet. Therefore, despite the fact that it is provided with 4 times as many credits as the best-effort queue, it actually ends up transmitting exactly the same amount of data in the first round.
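The credit accounting described above can be sketched as follows (Python, with packet sizes in the same abstract units as the example; an illustration of the behaviour described, not any vendor's implementation):

from collections import deque

def serve_with_credits(queue, credits):
    # Serve one queue for one round: a packet is transmitted only if
    # enough credits remain to cover the whole packet.
    sent = []
    while queue and queue[0] <= credits:
        size = queue.popleft()
        credits -= size
        sent.append(size)
    return sent, credits

# Gold queue: 4 credits per round, holding a 1-unit and a 4-unit packet.
gold = deque([1, 4])
sent, remaining = serve_with_credits(gold, 4)
print(sent, remaining)   # [1] 3 -- the 4-unit packet must wait because
                         # the 3 remaining credits cannot cover it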

WFQ overcomes some of the limitations of pure FQ:

WFQ provides a mechanism to favour high-priority flows over low-priority flows. Those flows are put into queues that are serviced in proportion to their priority level.

WFQ supports fairness between flows with variable packet lengths. Supporting fairness in these circumstances significantly increases the complexity.

However, WFQ still suffers from some of the limitations of pure FQ and suffers from some of its own making:

Due to its computational complexity, WFQ is difficult to apply to high-bandwidth flows, which limits its usefulness to low-bandwidth interfaces. This is further exacerbated by the fact that serialization delay on high-speed interfaces is already so low that the benefits of WFQ there are too small to be worthwhile (verging on insignificant).

WFQ is generally applied to a small number of queues associated with each interface, into which vast numbers of flows are placed. The theoretical benefit of WFQ in preventing the misbehaviour of one flow from affecting other flows is not provided to flows sharing the same queue.

WFQ is still implemented in software. This places constraints upon the scalability of WFQ within a routing device.

On Cisco routers, WFQ is enabled by default on low-speed (2 Mbps and lower) interfaces.

9.2.4.3 Round Robin

This is simply the process of servicing each of the queues in turn. Each queue gets an equal level of service so, in that respect, pure round robin is of very limited value to a network operator trying to provide differentiated QoS. In its purest form, the queues are serviced in a bit-wise fashion (i.e. the first bit is transmitted from each queue in turn, then the second, and so on). This is the closest it is possible to get to the ideal theoretical scheduling algorithm, Generalized Processor Sharing (GPS).

9.2.4.4 Weighted Round Robin (WRR)

Weighted round robin provides the means to apply a weight to the rate at which each queue is serviced. This is the simplest mechanism for providing differentiated QoS. As with round robin, in its purest form, weighted round robin is difficult to implement because