
Chapter Four: Memory Technologies and Interfacing

An example is when a 16-bit CPU's program is burned into two 8-bit memories, one containing the lower byte and one containing the upper byte of each instruction word. When the bytes in the block of memory are summed, the result is the same even if the two devices are swapped! Thus, a serious and common error would not be discovered.
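To see why a simple additive checksum cannot catch this fault, consider the short C sketch below (not from the original text; the ROM contents are made-up values). Because addition is commutative, summing the bytes gives the same result whether the low-byte and high-byte devices are installed correctly or swapped.

    #include <stdint.h>
    #include <stdio.h>

    /* Simple 8-bit additive checksum over a block of bytes. */
    static uint8_t checksum(const uint8_t *block, int len)
    {
        uint8_t sum = 0;
        for (int i = 0; i < len; i++)
            sum += block[i];
        return sum;
    }

    int main(void)
    {
        /* Hypothetical contents of the low-byte and high-byte program ROMs. */
        uint8_t low_rom[4]  = { 0x12, 0x34, 0x56, 0x78 };
        uint8_t high_rom[4] = { 0x9A, 0xBC, 0xDE, 0xF0 };

        /* Correct wiring: low ROM on the low half of the bus, high ROM on the high half. */
        uint8_t correct = (uint8_t)(checksum(low_rom, 4) + checksum(high_rom, 4));

        /* Devices swapped: the checksum over the whole block is unchanged. */
        uint8_t swapped = (uint8_t)(checksum(high_rom, 4) + checksum(low_rom, 4));

        printf("correct = 0x%02X, swapped = 0x%02X\n", correct, swapped);
        return 0;
    }

Both values print the same, so the wiring error goes undetected.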

The CRC (cyclic redundancy code) is used to detect changes within a block of data or in its order. The CRC is based on a polynomial and is calculated using shifts and XOR (exclusive OR) logic to generate a number that depends on both the data and the order of the data. The detailed operation of a CRC is beyond the scope of this book, but it is based on the same polynomials used for generating pseudo-random numbers. CRCs are commonly used for checking blocks of data on magnetic storage devices and communication links.
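As an illustrative sketch (not from the book), the following C routine computes a bit-by-bit CRC-16-CCITT using only shifts and XORs, mirroring a hardware shift-register implementation. The polynomial (0x1021) and initial value (0xFFFF) are the conventional parameters for that particular CRC, chosen here as an assumption rather than anything the text specifies.

    #include <stdint.h>
    #include <stddef.h>

    /* Bit-by-bit CRC-16-CCITT over a block of data. */
    uint16_t crc16_ccitt(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFF;                 /* conventional initial value */

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;     /* bring the next byte into the high bits */
            for (int bit = 0; bit < 8; bit++) {
                if (crc & 0x8000)
                    crc = (uint16_t)((crc << 1) ^ 0x1021);  /* shift and apply the polynomial */
                else
                    crc <<= 1;
            }
        }
        return crc;
    }

Unlike the additive checksum above, swapping two bytes in the block changes the result, so the CRC catches reordering errors as well as changed data.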

Memory Management

To understand what memory management is, it helps to understand the motivation behind its use. There are two kinds of memory management: memory address relocation and memory performance enhancement. The two are often used together, as is commonly done in personal computers. This section covers the performance enhancement aspects; the address relocation issues are covered in Chapter Six.

Storage technologies differ in performance and cost by many orders of magnitude. For example, semiconductor memory devices have access times many orders of magnitude shorter than those of magnetic disks (nanoseconds versus milliseconds), while magnetic disks cost several orders of magnitude less than semiconductor memory on a per-bit basis. This disparity in price and performance has led to the idea of using a small, fast memory to store the most frequently accessed subset of the data held in a larger, slower memory. This buffering technique, usually referred to as caching the memory contents in a fast memory, is essentially the same whether it is applied to the memory attached to a CPU or to magnetic or optical storage. In fact, a given system may have several layers of caching, starting with the smallest, fastest memory closest to the CPU, followed by progressively slower but larger memories.


Memory cost per byte is roughly inversely proportional to access time, as the table below indicates:

Memory type    Size (bytes)    Access time (sec)    Relative cost/byte
Tape           10^10           10                   1
Disk           10^9            10^-3                10
DRAM           10^6            10^-7                10^2
SRAM           10^5            10^-8                10^3

Cache Memory

When a high-speed memory is used to give the CPU rapid access to the most frequently used portion of main memory, it is referred to as a CPU cache memory. Likewise, when main memory is used to provide rapid access to data stored on a disk, it is referred to as a disk cache.

The objective of both approaches is to maximize the likelihood that a given piece of data will be found in the small, fast memory, minimizing the number of accesses to the large, slow memory and thereby reducing the average effective access time. Fast SRAM is used as a temporary buffer (memory cache) between main memory and the CPU, and main memory DRAM is used to buffer disk data (disk cache). Most hard disk drives also contain some fast internal semiconductor RAM to cache data as it is transferred to and from the disk.
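The payoff can be quantified. As a rough sketch (not taken from the text), the average effective access time of a two-level hierarchy follows directly from the hit rate of the fast memory. The 10 ns and 70 ns figures below are assumed values chosen only to show the calculation, and the simple model used here charges a miss only the slow access time.

    #include <stdio.h>

    /* Average effective access time of a two-level memory hierarchy:
       hits are served by the fast memory, misses by the slow one. */
    static double effective_access_time(double hit_rate, double fast_ns, double slow_ns)
    {
        return hit_rate * fast_ns + (1.0 - hit_rate) * slow_ns;
    }

    int main(void)
    {
        double fast_ns = 10.0;   /* hypothetical SRAM cache access time */
        double slow_ns = 70.0;   /* hypothetical DRAM main memory access time */

        for (double hit = 0.80; hit < 0.99; hit += 0.05)
            printf("hit rate %.2f -> %.1f ns average\n",
                   hit, effective_access_time(hit, fast_ns, slow_ns));
        return 0;
    }

At a 95 percent hit rate the average access time is already close to that of the fast memory, which is why even a relatively small cache is worthwhile.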

Virtual Memory

Disk storage can be used to emulate a larger primary memory than is actually available. Demand-paged virtual memory provides an apparently large primary memory by swapping pages of data between real primary memory and disk. This requires a combination of hardware and operating system software: hardware to translate logical (virtual) addresses and detect access attempts to pages that are not in primary memory, and software to move pages as needed and decide where and when pages should be kept.
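As a hedged sketch of the mechanism (not taken from the book), the C fragment below models the address-translation step of demand paging: a virtual address is split into a page number and an offset, a page table records whether each page is resident, and an access to a non-resident page invokes a stubbed-out page-fault handler. The sizes, names, and trivial page-to-frame mapping are all hypothetical.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u     /* hypothetical 4 KB pages */
    #define NUM_PAGES 256u      /* hypothetical 1 MB virtual address space */

    /* One page-table entry: is the page in primary memory, and where? */
    struct page_entry {
        bool     resident;      /* true if the page is currently in primary memory */
        uint32_t frame_base;    /* physical base address when resident */
    };

    static struct page_entry page_table[NUM_PAGES];

    /* Stand-in for the operating system's page-fault handler: it would read
       the page from disk into a free frame and mark the entry resident. */
    static void page_fault(uint32_t page)
    {
        printf("page fault on page %u: load it from disk\n", page);
        page_table[page].resident   = true;
        page_table[page].frame_base = page * PAGE_SIZE;  /* placeholder mapping */
    }

    /* Translate a virtual address to a physical one, faulting if necessary.
       In real hardware the fault is a trap handled by the operating system. */
    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t page   = vaddr / PAGE_SIZE;
        uint32_t offset = vaddr % PAGE_SIZE;

        if (!page_table[page].resident)
            page_fault(page);

        return page_table[page].frame_base + offset;
    }

    int main(void)
    {
        uint32_t paddr = translate(5 * PAGE_SIZE + 42);  /* first touch of page 5 faults */
        printf("physical address: 0x%08X\n", paddr);
        return 0;
    }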

When address relocation mechanisms are combined with disk caching and