
288 ADAPTIVE TCP LAYER

message, it puts the TCP sender into persist mode and itself enters the disconnected state. TCP periodically generates probe packets while in persist mode. When, eventually, the receiver is connected to the sender, it responds to these probe packets with a duplicate ACK (or a data packet). This removes TCP from persist mode and moves ATCP back into normal state. In order to ensure that TCP does not continue using the old CWND value, ATCP sets TCP’s CWND to one segment at the time it puts TCP in persist state. The reason for doing this is to force TCP to probe the correct value of CWND to use for the new route.

9.5.5.4 Other transitions

When ATCP is in the loss state, reception of an ECN or an ICMP source quench message will move ATCP into congested state and ATCP removes TCP from its persist state. Similarly, reception of an ICMP destination Unreachable message moves ATCP from either the loss state or the congested state into the disconnected state and ATCP moves TCP into persist mode (if it was not already in that state).
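The transitions described in these subsections can be summarized as a small state machine. The following is a hypothetical sketch of that logic; the state names come from the text, but the event names, class structure, and the shadow CWND variable are illustrative, not from the ATCP implementation.

```python
# Hypothetical sketch of the ATCP transitions described above.
NORMAL, LOSS, CONGESTED, DISCONNECTED = "normal", "loss", "congested", "disconnected"

class ATCPStateMachine:
    def __init__(self):
        self.state = NORMAL
        self.tcp_in_persist = False
        self.cwnd_segments = None  # shadow of TCP's CWND, for illustration only

    def on_event(self, event):
        if event in ("ecn", "icmp_source_quench"):
            # Congestion indications move ATCP to the congested state
            # and release TCP from persist mode if it was frozen.
            self.state = CONGESTED
            self.tcp_in_persist = False
        elif event == "icmp_dest_unreachable":
            # Route failure: freeze TCP (persist mode) and shrink CWND to
            # one segment so TCP re-probes the capacity of the new route.
            self.state = DISCONNECTED
            self.tcp_in_persist = True
            self.cwnd_segments = 1
        elif event == "dup_ack_from_receiver" and self.state == DISCONNECTED:
            # Receiver reachable again: the persist probes are answered,
            # TCP leaves persist mode and ATCP returns to normal.
            self.tcp_in_persist = False
            self.state = NORMAL
        return self.state

m = ATCPStateMachine()
m.on_event("icmp_dest_unreachable")   # network partition
m.on_event("dup_ack_from_receiver")   # reconnection
```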

9.5.5.5 Effect of lost messages

Note that due to the lossy environment, it is possible that an ECN may not arrive at the sender or, similarly, a destination unreachable message may be lost. If an ECN message is lost, the TCP sender will continue transmitting packets. However, every subsequent ACK will contain the ECN, thus ensuring that the sender will eventually receive the ECN, causing it to enter the congestion control state as it is supposed to. Likewise, if there is no route to the destination, the sender will eventually receive a retransmission of the destination unreachable message, causing TCP to be put into the persist state by ATCP. Thus, in all cases of lost messages, ATCP performs correctly. Further details on protocol implementation can be found in Liu and Singh [26].

 

[Figure: transfer time (s), roughly 200–2000, vs hop-to-hop delay (ms), 10–45, for ATCP and TCP; file size = 1 MB.]

Figure 9.13 ATCP and TCP performance in the presence of bit error only.


9.5.6 Performance examples

An experimental testbed for the evaluation of the ATCP protocol, consisting of five Pentium PCs, each with two Ethernet cards, is described in Liu and Singh [26]. This gives a four-hop network where the traffic in each hop is isolated from the other hops. To model the lossy and low-bandwidth nature of wireless links, a 32 kb/s channel is emulated over each hop at the IP level. Performance examples are shown in Figures 9.13–9.18. A uniformly better performance of ATCP over standard TCP is evident in all these examples.

 

[Figure: transfer time (s), roughly 0–3500, vs hop-to-hop delay (ms), 10–45, for ATCP and TCP with ECN; file size = 1 MB.]

Figure 9.14 ATCP and TCP performance in the presence of bit error and congestion.

 

[Figure: transfer time (s), roughly 0–3000, vs hop-to-hop delay (ms), 10–45, for ATCP and TCP; file size = 1 MB, slow start.]

Figure 9.15 ATCP and TCP performance in the presence of bit error and partition.

 

[Figure: transfer time (s), roughly 350–750, vs hop-to-hop delay (ms), 10–45, for ATCP and TCP; file size = 200 kbyte, slow start.]

Figure 9.16 ATCP and TCP performance in the presence of bit error and larger partition.

 

[Figure: transfer time (s), roughly 400–2000, vs hop-to-hop delay (ms), 10–45, for ATCP and TCP; file size = 1 Mbyte.]

Figure 9.17 ATCP and TCP performance in the presence of bit error and packet reordering.

 

 

[Figure: transfer time (s), 0–2000, for TCP and ATCP at packet arrival rates λ = 8 and λ = 16.]

Figure 9.18 TCP and ATCP transfer time for 1 MB data in the general case.


REFERENCES

[1] T. Henderson and R.H. Katz, Transport protocols for Internet-compatible satellite networks, IEEE JSAC, vol. 17, no. 2, 1999, pp. 326–344.

[2] T.V. Lakshman and U. Madhow, The performance of TCP/IP for networks with high bandwidth–delay products and random loss, IEEE/ACM Trans. Networking, vol. 5, no. 3, 1997, pp. 336–350.

[3] S. Floyd and V. Jacobson, Random early detection gateways for congestion avoidance, IEEE/ACM Trans. Networking, vol. 1, no. 4, 1993, pp. 397–413.

[4] L. Brakmo and L. Peterson, TCP Vegas: end to end congestion avoidance on a global internet, IEEE JSAC, vol. 13, no. 8, 1995, pp. 1465–1480.

[5] C. Barakat and E. Altman, Analysis of TCP with several bottleneck nodes, IEEE GLOBECOM, 1999, pp. 1709–1713.

[6] W.R. Stevens, TCP/IP Illustrated, vol. 1. Reading, MA: Addison-Wesley, 1994.

[7] R. Caceres and L. Iftode, Improving the performance of reliable transport protocols in mobile computing environments, IEEE J. Select. Areas Commun., vol. 13, 1995, pp. 850–857.

[8] A. Desimone, M.C. Chuah and O.C. Yue, Throughput performance of transport layer protocols over wireless LANs, in Proc. IEEE GLOBECOM'93, December 1993, pp. 542–549.

[9] K. Fall and S. Floyd, Comparisons of Tahoe, Reno, and Sack TCP [Online], March 1996. Available FTP: ftp://ftp.ee.lbl.gov

[10] V.K. Joysula, R.B. Bunt and J.J. Harms, Measurements of TCP performance over wireless connections, in Proc. WIRELESS'96, 8th Int. Conf. Wireless Communications, Calgary, 8–10 July 1996, pp. 289–299.

[11] T.V. Lakshman and U. Madhow, The performance of TCP/IP for networks with high bandwidth–delay products and random loss, IEEE Trans. Networking, vol. 3, 1997, pp. 336–350.

[12] P.P. Mishra, D. Sanghi and S.K. Tripathi, TCP flow control in lossy networks: analysis and enhancements, in Computer Networks, Architecture and Applications, IFIP Transactions C-13, S.V. Raghavan, G.V. Bochmann and G. Pujolle (eds). Amsterdam: Elsevier North-Holland, 1993, pp. 181–193.

[13] A. Kumar, Comparative performance analysis of versions of TCP in a local network with a lossy link, IEEE/ACM Trans. Networking, vol. 6, no. 4, 1998, pp. 485–498.

[14] A. Bakre and B.R. Badrinath, I-TCP: indirect TCP for mobile hosts, in Proc. 15th Int. Conf. on Distributed Computing Systems, Vancouver, June 1995, pp. 136–143.

[15] A. Bakre and B.R. Badrinath, Indirect transport layer protocols for mobile wireless environment, in Mobile Computing, T. Imielinski and H.F. Korth (eds). Norwell, MA: Kluwer Academic, 1996, pp. 229–252.

[16] H. Balakrishnan, S. Seshan and R. Katz, Improving reliable transport and handoff performance in cellular wireless networks, Wireless Networks, vol. 1, no. 4, 1995.

[17] H. Balakrishnan, V.N. Padmanabhan, S. Seshan and R. Katz, A comparison of mechanisms for improving TCP performance over wireless links, in ACM SIGCOMM'96, Palo Alto, CA, August 1996, pp. 256–269.

[18] B.R. Badrinath and T. Imielinski, Location management for networks with mobile users, in Mobile Computing, T. Imielinski and H.F. Korth (eds). Norwell, MA: Kluwer Academic, 1996, pp. 129–152.

[19] K. Brown and S. Singh, A network architecture for mobile computing, in Proc. IEEE INFOCOM'96, March 1996, pp. 1388–1396.

[20] K. Brown and S. Singh, M-UDP: UDP for mobile networks, ACM Comput. Commun. Rev., vol. 26, no. 5, 1996, pp. 60–78.

[21] R. Caceres and L. Iftode, Improving the performance of reliable transport protocols in mobile computing environments, IEEE J. Select. Areas Commun., vol. 13, no. 5, 1994, pp. 850–857.

[22] R. Ghai and S. Singh, An architecture and communication protocol for picocellular networks, IEEE Personal Commun. Mag., no. 3, 1994, pp. 36–46.

[23] K. Seal and S. Singh, Loss profiles: a quality of service measure in mobile computing, J. Wireless Networks, vol. 2, 1996, pp. 45–61.

[24] S. Singh, Quality of service guarantees in mobile computing, J. Comput. Commun., vol. 19, no. 4, 1996, pp. 359–371.

[25] S. Floyd and V. Jacobson, Random early detection gateways for congestion avoidance, IEEE/ACM Trans. Networking, vol. 1, no. 4, 1993, pp. 397–413.

[26] J. Liu and S. Singh, ATCP: TCP for mobile ad hoc networks, IEEE J. Selected Areas Commun., vol. 19, no. 7, 2001, pp. 1300–1315.

[27] http://www-nrg.ee.lbl.gov/ecn-arch/

[28] G. Holland and N. Vaidya, Analysis of TCP performance over mobile ad hoc networks, in Proc. ACM Mobile Communications Conf., Seattle, WA, 15–20 August 1999, pp. 219–230.

[29] D.B. Johnson and D.A. Maltz, Dynamic source routing in ad hoc wireless networks, in Mobile Computing, T. Imielinski and H.F. Korth (eds). Norwell, MA: Kluwer Academic, 1996, pp. 153–191.

[30] K. Chandran, S. Raghunathan, S. Venkatesan and R. Prakash, A feedback-based scheme for improving TCP performance in ad hoc wireless networks, in Proc. 18th Int. Conf. Distributed Computing Systems, Amsterdam, 26–29 May 1998, pp. 474–479.

10

Crosslayer Optimization

10.1 INTRODUCTION

In many algorithms throughout the book, information from the physical layer was used to adjust the parameters of higher-layer protocols in the network. In this section we additionally discuss explicitly the gains that can be achieved by a cross-layer approach, where information from one layer is passed to higher layers in order to optimize system performance [1–49]. In Chapter 9 we discussed the operation of TCP and indicated the importance of being able to distinguish packet losses due to network congestion from those due to degradation of physical channel parameters. The reaction of TCP to losses due to congestion is to change the congestion window size and reduce the packet transmission rate, whereas for physical channel degradation simple retransmission suffices.

Routers in the network indicate congestion by dropping packets, which in turn causes the source to adaptively decrease its sending rate. The ECN mechanism used to notify the receiver whenever congestion occurs in the network was also discussed in the previous chapter. This mechanism works in the following manner: included in a TCP packet's header is the ECN bit, which is set to zero by the source. If a router detects congestion, it sets the ECN bit to one, and the packet is said to be marked. The marked packet eventually reaches the destination, which in turn informs the source about the value of the mark (i.e. the ECN bit value). The source adapts its transmission rate depending on the value of the mark.
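The marking path just described can be sketched end to end. The sketch below is purely illustrative: the single `ecn` field is a simplification (real ECN uses two bits in the IP header plus the ECE/CWR flags in TCP), and the halving of the rate on a mark mirrors TCP's congestion response only loosely.

```python
# Illustrative sketch of the ECN marking path described above.
def source_send(payload):
    return {"payload": payload, "ecn": 0}  # source sets the ECN bit to zero

def router_forward(packet, congested):
    if congested:
        packet["ecn"] = 1  # router marks the packet instead of dropping it
    return packet

def destination_receive(packet):
    return packet["ecn"]  # mark value echoed back to the source in the ACK

def source_react(mark, rate):
    # Source reduces its rate only when a congestion mark is echoed.
    return rate / 2 if mark else rate

pkt = router_forward(source_send(b"data"), congested=True)
new_rate = source_react(destination_receive(pkt), rate=100.0)  # -> 50.0
```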

The current deployment of the TCP protocol interprets all losses as being congestionrelated. Whenever losses occur over a wireless channel, the TCP source reacts to this as though it was due to congestion and thus decreases the packet transmission rate, causing loss in network throughput. A solution that has been proposed to mitigate this problem is to ‘smooth’ the channel by suitable coding and link-layer ARQ at a faster timescale than that of the TCP control loop so that the wireless link ideally is perceived as a constant channel,

Advanced Wireless Networks: 4G Technologies Savo G. Glisic

© 2006 John Wiley & Sons, Ltd.


but with lower capacity. However, in practice, there is still the problem that the TCP sender may not be fully shielded from wireless link losses. This can lead to the TCP congestion control mechanism reacting to packet losses, thus resulting in redundant retransmissions and loss of throughput. However, once ECN-enabled TCP is deployed, where the ECN bit can be used to mark packets to indicate congestion, there is a means of differentiating between congestion-related loss and wireless channel-related loss.

Thus, the channel need not be smoothed because the ECN mechanism provides a means of explicitly indicating congestion. In Chapter 9 it was demonstrated that, in a single user environment, if packets are marked based solely on congestion information, there is no significant degradation of TCP performance due to the time-varying nature of the wireless channel as compared with wired networks. Such an approach is an example of where a crosslayer view of physical layer information (channel conditions) is used at the network layer to significantly improve network-layer throughput performance.

As a second example consider a conventional cellular system with a fixed base station and a number of mobile users. Data flows (packets) arrive from the core network to the cell base station and are destined for the mobile users, with the packets for each user being queued temporarily at the base station (a separate queue is maintained for each user). The objective of the base station is to schedule these packets to various mobiles in a timely manner. Owing to the asymmetric nature of the traffic we can restrict ourselves to just considering the forward link problem, where the direction of the data flow is from the base station to the mobile user. An efficient way to use the cellular spectrum is to dedicate some bandwidth for data transport and allocate this bandwidth in a dynamic manner among various data users. For instance, time could be divided into fixed-size time slots and users allocated time slots dynamically. The simplest scheduling algorithm is the round-robin mechanism, where users are periodically allocated slots irrespective of whether there is data to be sent to a particular mobile user. Note that this simple TDM scheme still leads to some wasted bandwidth during inactivity. Furthermore, it suffers from a more subtle problem: this approach is independent of the channel state (a channel-state-independent scheduling mechanism).

A channel-state-dependent scheduling algorithm can provide significant gains. As an illustration, consider a wireless system consisting of three users, as in Figure 10.1. For this example, we consider access time to be slotted and the channels to be constant over a time slot. For demonstration, let us assume that each channel is either ON or OFF, equally likely, and that the channels are independent across users. Thus, in this system there are eight possible (instantaneous) channel states for the three independent users, ranging from (ON, ON, ON) to (OFF, OFF, OFF). When a user's channel is ON, one packet can be transmitted successfully to the mobile user during the time slot. The system is assumed to be a TDM system; thus, the base station can transmit to only one user in each time slot. The associated scheduling problem is to decide which user is allowed access to the channel during each particular time slot. A naive scheduling rule would be to employ the well-known round-robin mechanism. In such a scheme, the users are periodically given access to the channel, with each user getting one-third of the slots over time. As the channel of each user is equally likely to be ON or OFF in each time slot, it follows that, over time, on average, each user will get a data rate of (1/2)(1/3) = 1/6 packets/slot.
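The round-robin figure can be checked with exact rational arithmetic: each user owns 1/3 of the slots, and its channel is ON half the time.

```python
# Check of the round-robin throughput claim above.
from fractions import Fraction

p_on = Fraction(1, 2)          # channel ON with probability 1/2
slot_share = Fraction(1, 3)    # each user gets one-third of the slots
per_user_rate = p_on * slot_share
assert per_user_rate == Fraction(1, 6)  # packets/slot per user
```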

On the other hand, suppose the base station uses knowledge of the instantaneous channel state. Then a simple policy would be to schedule and transmit to a user whose channel is in the ON state. If more than one user’s channel is in the ON state, the scheduler could pick a user randomly (equally likely) among those users whose channels are ON, and send


[Figure: a scheduler at the base station serves per-user queues Q1(t), Q2(t), Q3(t), fed by data from the Internet, over on/off channels with states X1(t), X2(t), X3(t) toward users 1–3.]

data to the selected user. This simple example assumes that all users have identical traffic demands. For the case where some users have greater needs than others, high-demand users would be assigned channel access with greater likelihood. In this case, data is not sent by the base station if and only if all users' channels are OFF (which occurs on average only one-eighth of the time). Thus, the total data rate achieved by this state-dependent rule is 1 − 1/8 = 7/8 packets/slot. As this rule is symmetric across users, it follows that, on average, the data rate per user is (1/3)(7/8) = 7/24 packets/slot, which is almost twice the throughput of the round-robin scheme that provided 1/6 packets/slot. In addition, the base station does not radiate power during bad channel conditions, thereby decreasing interference levels in the wireless network.
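The 7/8 and 7/24 figures follow from enumerating all eight equally likely channel states: the scheduler transmits unless every channel is OFF.

```python
# Enumerating the eight channel states to verify the figures above.
from fractions import Fraction
from itertools import product

states = list(product([0, 1], repeat=3))   # (OFF=0, ON=1) for each of 3 users
p_state = Fraction(1, 8)                   # states are equally likely

# One packet is delivered whenever at least one channel is ON.
total_rate = sum(p_state for s in states if any(s))
assert total_rate == Fraction(7, 8)        # packets/slot overall

per_user_rate = total_rate / 3             # the rule is symmetric across users
assert per_user_rate == Fraction(7, 24)    # vs 1/6 under round-robin
```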

This gain achieved due to channel-state-dependent scheduling is called the multi-user diversity gain. This example illustrates the significance of multi-user diversity gain in network scheduling. However, a central question is how to design online algorithms that achieve this gain, while also supporting diverse QoS requirements for various users for realistic channel scenarios. While this question is not yet completely answered, there has been extensive research on various aspects of the problem [2].


10.2 A CROSS-LAYER ARCHITECTURE FOR VIDEO DELIVERY

With time-varying wireless link quality, providing QoS for video applications in the form of an absolute guarantee [51, 52] may not be feasible. Thus, it is more reasonable to provide QoS in the form of a soft (or 'elastic') guarantee, which allows QoS parameters in the priority transmission system to be adjusted along with changing channel conditions. The relative QoS differentiation discussed in References [52–54] is one of the possible solutions for a 4G adaptive QoS system. Similarly, at the application layer it is desirable to have a video bitstream that is adaptive to changing channel conditions. Among several possible approaches to video quality adaptation [55, 56], scalable video [50] seems attractive due to its low complexity and high flexibility in rate adaptation.

Figure 10.2 shows a cross-layer QoS mapping architecture for video delivery over single-hop wireless networks. This architecture represents an end-to-end delivery system for a video source from the sender to the receiver, which includes a source video encoding module, a cross-layer QoS mapping and adaptation module, a link-layer packet transmission module, the wireless channel (time-varying and non-stationary), an adaptive wireless channel modeling module, and the video decoder/output at the receiver. In this context, the wireless channel is modeled at the link layer (instead of the physical layer), since link-layer modeling is more amenable to analysis and simulation of the QoS provisioning system (e.g. delay bound or packet loss rate). For details see Chapter 8 and the discussion on effective channel capacity. In the link-layer transmission control module in the architecture, a class-based buffering and scheduling mechanism is employed to achieve differentiated services. In particular, K QoS priority classes are maintained, with each class of traffic held in a separate buffer. A strict (non-preemptive) priority scheduling policy is employed to serve

[Figure: the video encoder splits the input into K video substreams with QoS(1)–QoS(K); a QoS mapping and adaptation module reconciles the video QoS requirement with transmission QoS provisioning and transmission control; the transmission module sends over the time-varying and non-stationary wireless channel, whose feedback drives adaptive channel modeling; the video decoder at the receiver produces the video output.]

Figure 10.2 A block diagram of a cross-layer QoS management architecture for video delivery over a wireless channel.

 

 


[Figure: video frames 1–N coded I, P, . . . , P in a base layer plus first, second and third enhancement layers, with playback deadlines Tp, Tp + 1/F, Tp + 2/F, . . . , Tp + (N − 1)/F.]

Figure 10.3 GOP structure of MPEG-4 PFGS (progressive fine granularity scalable) scalable video. (Reproduced by permission of IEEE [50].)

packets among the classes. At the video application layer, each video packet is characterized based on its loss and delay properties, which contribute to the end-to-end video quality and service. Then these video packets are classified and optimally mapped to the classes of the link transmission module under the rate constraint [57, 58]. The video application-layer QoS and link-layer QoS are allowed to interact with each other and to adapt along with the wireless channel condition. The objective of this interaction and adaptation is to find a satisfactory QoS tradeoff so that each end user's video service can be supported with the available transmission resources.
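The class-based buffering with strict (non-preemptive) priority service can be sketched as follows. This is a minimal illustration, not the book's implementation; the convention that a lower class index means higher priority is an assumption made here.

```python
# Minimal sketch of K per-class buffers served by strict priority;
# class 0 is assumed to be the highest priority (illustrative choice).
from collections import deque

class PriorityLinkScheduler:
    def __init__(self, K):
        self.buffers = [deque() for _ in range(K)]

    def enqueue(self, packet, klass):
        self.buffers[klass].append(packet)

    def next_packet(self):
        # Serve the highest-priority non-empty class. Non-preemption means a
        # packet in service finishes; priority applies only between packets.
        for buf in self.buffers:
            if buf:
                return buf.popleft()
        return None

s = PriorityLinkScheduler(K=3)
s.enqueue("enhancement-layer", 2)
s.enqueue("base-layer", 0)
assert s.next_packet() == "base-layer"  # class 0 is always served first
```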

Figure 10.3 shows a group-of-pictures (GOP) structure of MPEG-4 PFGS (progressive fine granularity scalable) video [56]. Suppose that there are N video frames and M video layers in one GOP. The loss of any video portion will affect the end-to-end video quality due to the interdependency within the encoding structure. Each video layer is packetized into several fixed-size packets before transmission. A video packet cannot contain video data across video layers or video frames. Packets from the same video layer are put into the same priority class. Furthermore, suppose that the video playback frame rate at the end user is fixed at F frames/s. If the mobile terminal starts to play back the first video frame of a GOP at time Tp, video frame n in the same GOP should be received and ready to be displayed before time Td(n) = Tp + (n − 1)/F for uninterrupted playback.
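The deadline rule Td(n) = Tp + (n − 1)/F is easy to compute per frame. The values of Tp and F below are invented for illustration.

```python
# Playback deadlines for the frames of one GOP, per Td(n) = Tp + (n - 1)/F.
def playback_deadline(n, Tp, F):
    # n is the 1-based frame index within the GOP.
    return Tp + (n - 1) / F

F = 25.0    # frames/s (illustrative)
Tp = 2.0    # playback start of the GOP, in seconds (illustrative)
deadlines = [playback_deadline(n, Tp, F) for n in (1, 2, 3)]
```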

Let p = ( p1, p2, . . . , pM ) be the mapping policy from M video layers to K priority classes, where p j {0, 1, . . . , K } is the priority class that video layer j is transmitted. pj = 0 represents the fact that video layer j is abstained from transmission. The overall expected distortion from the mapping scheme p can be derived using the dependent structure of scalable video [50, 57], as shown in Figure 10.3, which can be expressed as follows:

D(p) = D0 − Σ_{j=0}^{M} D_j(p_j)          (10.1)
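The objective in Eq. (10.1) can be illustrated numerically. The sketch below is hypothetical: the per-layer distortion reductions are invented values, D_j(p_j) is simplified to a fixed reduction earned whenever layer j is mapped to any class (p_j > 0), and the inter-layer dependency of scalable video (an enhancement layer being useless without the layers below it) is ignored for brevity.

```python
# Hypothetical illustration of Eq. (10.1): total distortion is the
# no-transmission distortion D0 minus the reduction contributed by each
# transmitted layer (all numbers invented for illustration).
D0 = 100.0
reduction = {1: 40.0, 2: 20.0, 3: 10.0}   # simplified D_j per delivered layer

def expected_distortion(p):
    # p maps layer j -> priority class; p[j] = 0 means layer j is withheld
    # from transmission and contributes no distortion reduction.
    return D0 - sum(reduction[j] for j, k in p.items() if k > 0)

d = expected_distortion({1: 1, 2: 2, 3: 0})  # drop the third layer
```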