
Data Structures and Algorithms: CHAPTER 12: Memory Management

(1)  var
         p: integer;   { the position of the current block }
         gap: integer; { the total amount of empty space seen so far }
     begin
(2)      p := left end of heap;
(4)      gap := 0;
(5)      while p ≤ right end of heap do begin
             { let p point to block B }
(6)          if B is empty then
(7)              gap := gap + count in block B
             else { B is full }
(8)              forwarding address of B := p - gap;
(9)          p := p + count in block B
         end
     end;

Fig. 12.17. Computation of forwarding addresses.

Having computed forwarding addresses, we then look at all pointers to the heap. We follow each pointer to some block B and replace the pointer by the forwarding address found in block B. Finally, we move all full blocks to their forwarding addresses. This process is similar to Fig. 12.17, with line (8) replaced by

for i := p to p - 1 + count in B do heap[i - gap] := heap[i];

to move block B left by an amount gap. Note that the movement of full blocks, which takes time proportional to the amount of the heap in use, will likely dominate the other costs of the compaction.
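To make the whole scheme concrete, here is a small runnable Pascal sketch of the two passes just described. It is not the book's code: the three-word block header (count, full/empty flag, forwarding-address slot), the heap size, and the array ptrs holding the pointers into the heap are all assumptions made for this example.

     program CompactDemo(output);
     { A sketch of compaction: every block is assumed to begin with
           heap[p]     = count, the total size of the block,
           heap[p + 1] = 1 if the block is full, 0 if it is empty,
           heap[p + 2] = forwarding address, filled in by the first pass,
       followed by count - 3 data words.  Pointers into the heap are
       collected in ptrs. }
     const
       N = 18;                                  { heap size }
       M = 2;                                   { number of pointers into the heap }
     var
       heap: array[1..N] of integer;
       ptrs: array[1..M] of integer;
       p, gap, cnt, i: integer;
     begin
       { build a toy heap: a full block at 1 (size 6), an empty block at 7
         (size 5), and a full block at 12 (size 7) }
       for i := 1 to N do heap[i] := 0;
       heap[1] := 6;  heap[2] := 1;
       for i := 4 to 6 do heap[i] := 100 + i;
       heap[7] := 5;  heap[8] := 0;
       heap[12] := 7; heap[13] := 1;
       for i := 15 to 18 do heap[i] := 200 + i;
       ptrs[1] := 1;  ptrs[2] := 12;

       { pass 1: compute forwarding addresses, as in Fig. 12.17 }
       p := 1; gap := 0;
       while p <= N do begin
         if heap[p + 1] = 0 then
           gap := gap + heap[p]
         else
           heap[p + 2] := p - gap;
         p := p + heap[p]
       end;

       { replace every pointer by the forwarding address stored in its block }
       for i := 1 to M do
         ptrs[i] := heap[ptrs[i] + 2];

       { pass 2: slide full blocks left, with line (8) replaced by the move loop }
       p := 1; gap := 0;
       while p <= N do begin
         cnt := heap[p];                        { save the count before moving }
         if heap[p + 1] = 0 then
           gap := gap + cnt
         else
           for i := p to p - 1 + cnt do
             heap[i - gap] := heap[i];
         p := p + cnt
       end;

       writeln('pointers are now ', ptrs[1], ' and ', ptrs[2])   { 1 and 7 }
     end.

The program prints "pointers are now 1 and 7": the second full block has slid left across the five-word gap, and both pointers have been redirected through the forwarding addresses computed in the first pass.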

Morris' Algorithm

F. L. Morris discovered a method for compacting a heap without using space in blocks for forwarding addresses. It does, however, require an endmarker bit associated with each pointer and with each block to indicate the end of a chain of pointers. The essential idea is to create a chain of pointers emanating from a fixed position in each full block and linking all the pointers to that block. For example, we see in Fig. 12.16(a) three pointers, A, D, and E, pointing to the leftmost full block. In Fig. 12.18, we see the desired chain of pointers. A chunk of the data of size equal to that of a pointer has been removed from the block and placed at the end of the chain, where pointer A used to be.

Fig. 12.18. Chaining pointers.

The method for creating such chains of pointers is as follows. We scan all the pointers in any convenient order. Suppose we consider a pointer p to block B. If the endmarker bit in block B is 0, then p is the first pointer found that points to B. We place in p the contents of those positions of B used for the pointer chain, and we make these positions of B point to p. Then we set the endmarker bit in B to 1, indicating it now has a pointer, and we set the endmarker bit in p to 0, indicating the end of the pointer chain and the presence of the displaced data.

Suppose now that when we first consider pointer p to block B the endmarker bit in B is 1. Then B already has the head of a chain of pointers. We copy the pointer in B into p, make B point to p, and set the endmarker bit in p to 1. Thus we effectively insert p at the head of the chain.

Once we have all the pointers to each block linked in a chain emanating from that block, we can move the full blocks as far left as possible, just as in the simpler algorithm previously discussed. Lastly, we scan each block in its new position and run down its chain of pointers. Each pointer encountered is made to point to the block in its new position. When we encounter the end of the chain, we restore the data from B, held in the last pointer, to its rightful place in block B and set the endmarker bit in the block to 0.
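The following Pascal sketch illustrates only the chaining trick on a single block B with three pointers to it, mirroring the situation of Fig. 12.18. It is an illustration, not Morris' full algorithm: the block is reduced to two variables Bword (its first word) and Bend (its endmarker bit), the pointers live in a small array, no blocks are actually moved, and the names Thread and Retarget are inventions of this example. Retarget plays the role of the final pass that rewrites every pointer to the block's new address and puts the displaced data back.

     program MorrisChainDemo(output);
     { Endmarker bit true means the cell holds a link to the next pointer
       in the chain; false marks the end of the chain, where the displaced
       data sits. }
     const
       M = 3;                       { three pointers to B, as with A, D, E in Fig. 12.18 }
     var
       cells:   array[1..M] of integer;
       cellEnd: array[1..M] of boolean;
       Bword:   integer;            { the first word of block B }
       Bend:    boolean;            { the endmarker bit of block B }
       i:       integer;

     { Link pointer p, known to point to B, into the chain emanating from B. }
     procedure Thread(p: integer);
     begin
       if not Bend then begin       { p is the first pointer found that points to B }
         cells[p] := Bword;         { displace B's data into p }
         cellEnd[p] := false;       { p is the end of the chain }
         Bend := true
       end
       else begin                   { B already heads a chain: insert p at its front }
         cells[p] := Bword;         { copy B's chain head into p }
         cellEnd[p] := true
       end;
       Bword := p                   { in both cases B now points to p }
     end;

     { After B has moved, run down the chain, aim every pointer at newaddr,
       and restore the displaced data to B. }
     procedure Retarget(newaddr: integer);
     var
       q, next: integer;
     begin
       q := Bword;
       while cellEnd[q] do begin
         next := cells[q];
         cells[q] := newaddr;
         q := next
       end;
       Bword := cells[q];           { the last pointer held B's displaced data }
       cells[q] := newaddr;
       Bend := false
     end;

     begin
       Bword := 42;                 { 42 stands for B's original data }
       Bend := false;
       for i := 1 to M do Thread(i);
       Retarget(777);               { pretend B has been moved to address 777 }
       for i := 1 to M do write(cells[i], ' ');   { prints 777 777 777 }
       writeln('data = ', Bword)                  { prints data = 42 }
     end.

After Retarget, all three cells hold the new address 777 and the displaced data 42 is back in Bword, which is exactly the final pass described above.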

Exercises

12.1 Consider the following heap of 1000 bytes, where blank blocks are in use, and the labeled blocks are linked on a free list in alphabetical order. The numbers indicate the first byte in each block.


Suppose the following requests are made:

i. allocate a block of 120 bytes

ii. allocate a block of 70 bytes

iii. return to the front of the available list the block in bytes 700-849

iv. allocate a block of 130 bytes.

Give the free list, in order, after executing the above sequence of steps, assuming free blocks are selected by the strategy of

a. first fit

b. best fit.

12.2 Consider the following heap in which blank regions are in use and labeled regions are empty.

Give sequences of requests that can be satisfied if we use

a. first fit but not best fit

b. best fit but not first fit.

12.3 Suppose we use an exponential buddy system with sizes 1, 2, 4, 8, and 16 on a heap of size 16. If we request a block of size n, for 1 ≤ n ≤ 16, we must allocate a block of size 2^i, where 2^{i-1} < n ≤ 2^i. The unused portion of the block, if any, cannot be used to satisfy any other request. If we need a block of size 2^i, i < 4, and no such free block exists, then we first find a block of size 2^{i+1} and split it into two equal parts. If no block of size 2^{i+1} exists, we first find and split a free block of size 2^{i+2}, and so on. If we find ourselves looking for a free block of size 32, we fail and cannot satisfy the request. For the purposes of this question, we never combine adjacent free blocks in the heap.

There are sequences of requests a_1, a_2, . . . , a_n whose sum is less than 16, such that the last request cannot be satisfied. For example, consider the sequence 5, 5, 5. The first request causes the initial block of size 16 to be split into two blocks of size 8, and one of them is used to satisfy the request. The remaining free block of size 8 satisfies the second request, and there is no free space to satisfy the third request.

Find a sequence a_1, a_2, . . . , a_n of (not necessarily identical) integers between 1 and 16, whose sum is as small as possible, such that, treated as a sequence of requests for blocks of size a_1, a_2, . . . , a_n, the last request cannot be satisfied. Explain why your sequence of requests cannot be satisfied, but any sequence whose sum is smaller can be satisfied.
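The allocation rule of Exercise 12.3 can be simulated mechanically, which may help in experimenting with candidate request sequences. The Pascal sketch below is only an illustration of the rule, not a solution to the exercise; the array avail, which counts free blocks of each size, and the function name Allocate are choices made for this example.

     program BuddySimDemo(output);
     { Requests are rounded up to a power of two, larger free blocks are
       split in half as needed, and free blocks are never recombined.
       avail[i] counts the free blocks of size 2^i; the heap of size 16
       starts as a single free block of size 16. }
     var
       avail: array[0..4] of integer;
       i: integer;

     { n is assumed to satisfy 1 <= n <= 16 }
     function Allocate(n: integer): boolean;
     var
       i, j, size: integer;
     begin
       i := 0; size := 1;
       while size < n do begin     { find i with 2^(i-1) < n <= 2^i }
         size := 2 * size;
         i := i + 1
       end;
       j := i;                     { smallest available size that is large enough }
       while (j < 4) and (avail[j] = 0) do j := j + 1;
       if avail[j] = 0 then
         Allocate := false         { no free block of size 2^i through 16 exists }
       else begin
         while j > i do begin      { split down to the required size }
           avail[j] := avail[j] - 1;
           avail[j - 1] := avail[j - 1] + 2;
           j := j - 1
         end;
         avail[i] := avail[i] - 1;
         Allocate := true
       end
     end;

     begin
       for i := 0 to 3 do avail[i] := 0;
       avail[4] := 1;
       writeln('request 5: ', Allocate(5));   { TRUE:  16 is split into 8 + 8 }
       writeln('request 5: ', Allocate(5));   { TRUE:  the second 8 is used }
       writeln('request 5: ', Allocate(5))    { FALSE: no free block remains }
     end.

Run on the sequence 5, 5, 5, the simulator reproduces the behavior described in the exercise: the third request fails even though the total requested is less than 16.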

12.4 Consider compacting memory while managing equal-sized blocks. Assume each block consists of a data field and a pointer field, and that we have marked every block currently in use. The blocks are currently located between memory locations a and b. We wish to relocate all active blocks so that they occupy contiguous memory starting at a. In relocating a block, remember that the pointer field of any block pointing to the relocated block must be updated. Design an algorithm for compacting the blocks.

12.5 Consider an array of size n. Design an algorithm to shift all items in the array k places cyclically counterclockwise with only constant additional memory, independent of k and n. Hint: consider what happens if we reverse the first k elements, the last n - k elements, and then finally the entire array.
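The hint of Exercise 12.5 can be turned directly into code. The Pascal sketch below is one possible reading of it, not a prescribed solution; the array bounds, element type, and the names Reverse and RotateLeft are assumptions of the example. Only a constant number of extra variables is used.

     program RotateDemo(output);
     const
       n = 8;
     type
       ItemArray = array[1..n] of integer;
     var
       A: ItemArray;
       i: integer;

     { Reverse A[lo..hi] in place using a single temporary variable. }
     procedure Reverse(var A: ItemArray; lo, hi: integer);
     var
       t: integer;
     begin
       while lo < hi do begin
         t := A[lo]; A[lo] := A[hi]; A[hi] := t;
         lo := lo + 1;
         hi := hi - 1
       end
     end;

     { Shift A cyclically k places counterclockwise (to the left). }
     procedure RotateLeft(var A: ItemArray; k: integer);
     begin
       k := k mod n;
       Reverse(A, 1, k);           { reverse the first k elements }
       Reverse(A, k + 1, n);       { reverse the last n - k elements }
       Reverse(A, 1, n)            { reverse the whole array }
     end;

     begin
       for i := 1 to n do A[i] := i;
       RotateLeft(A, 3);
       for i := 1 to n do write(A[i], ' ');   { prints 4 5 6 7 8 1 2 3 }
       writeln
     end.

With k = 3 and n = 8 the array 1, 2, ..., 8 becomes 4, 5, 6, 7, 8, 1, 2, 3, a cyclic shift of three places to the left.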

12.6 Design an algorithm to replace a substring y of a string xyz by another substring y' using as little additional memory as possible. What is the time and space complexity of your algorithm?

12.7 Write a program to make a copy of a given list. What is the time and space complexity of your program?

12.8 Write a program to determine whether two lists are identical. What is the time and space complexity of your program?

12.9 Implement Morris' heap compaction algorithm of Section 12.6.


*12.10 Design a storage allocation scheme for a situation in which memory is allocated and freed in blocks of lengths one and two. Give bounds on how well your algorithm works.

Bibliographic Notes

Efficient storage management is a central concern in many programming languages, including Snobol [Farber, Griswold, and Polonsky (1964)], Lisp [McCarthy (1965)], APL [Iverson (1962)], and SETL [Schwartz (1973)]. Nicholls [1975] and Pratt [1975] discuss storage management techniques in the context of programming language compilation.

The buddy system of storage allocation was first published by Knowlton [1965]. Fibonacci buddy systems were studied by Hirschberg [1973].

The elegant marking algorithm for use in garbage collection was discovered by Peter Deutsch (Deutsch and Bobrow [1966]) and by Schorr and Waite [1967]. The heap compaction scheme in Section 12.6 is from Morris [1978].

Robson [1971] and Robson [1974] analyze the amount of memory needed for dynamic storage allocation algorithms. Robson [1977] presents a bounded workspace algorithm for copying cyclic structures. Fletcher and Silver [1966] contains another solution to Exercise 12.5 that uses little additional memory.

Each programming language must provide for itself a method of representing the current set of variables, and any of the methods discussed in Chapters 4 and 5 is appropriate. For example, most implementations use a hash table to hold the variables.

This awkwardness is made necessary by peculiarities of Pascal.

Note that in Fig. 12.1, instead of a count indicating the length of the block, we used the length of the data.

The reader should, as an exercise, discover how to maintain the pointers when a block is split into two; presumably one piece is used for a new data item, and the other remains empty.

If c - d is so small that a count and pointer cannot fit, we must use the whole block and delete it from the available list.


Actually, there is a minimum block size larger than 1, since blocks must hold a pointer, a count and a full/empty bit if they are to be chained to an available list.

Since empty blocks must hold pointers (and, as we shall see, other information as well) we do not really start the sequence of permitted sizes at 1, but rather at some suitably larger number in the sequence, say 8 bytes.

Of course, if there are no empty blocks of size s_{i+1}, we create one by splitting a block of size s_{i+2}, and so on. If no blocks of any larger size exist, we are effectively out of space and must reorganize the heap as in the next section.

Incidentally, it is convenient to think of the blocks of sizes s_i and s_{i-k} making up a block of size s_{i+1} as "buddies," from whence comes the term "buddy system."

As in the previous section, we must assume that one bit of each block is reserved to tell whether the block is in use or empty.

In all that follows we assume the collection of such pointers is available. For example, a typical Snobol implementation stores pairs consisting of a variable name and a pointer to the value for that name in a hash table, with the hash function computed from the name. Scanning the whole hash table allows us to visit all pointers.
