
EMBEDDED CONTROLLER

Hardware Design

Processor Performance Metrics

In an effort to compare different types of computers, manufacturers have come up with a host of metrics to quantify processor performance. These metrics include:


IPS (instructions per second)

OPS (operations per second)

FLOPS (floating point OPS)

Benchmarks (standardized or proprietary “sample programs”): short code sequences indicative of processor performance in small application programs

IPS

IPS, or the more common forms, MIPS (millions of IPS) and BIPS (billions of IPS), are commonly thrown about, but are essentially worthless marketing hype because they only describe the rate at which the fastest instruction executes on a machine. Often that instruction is the NOP instruction, so 500 MIPS may mean that the processor can do nothing 500 million times per second!

OPS

In response to the weakness in the IPS measurement, OPS (as well as MOPS and BOPS, which sound fun at least) are instruction execution rates based on a mix of different instructions. The intent is to use a standard execution-frequency-weighted instruction mix that more accurately represents the “nominal” instruction execution time. FLOPS (megaFLOPS, gigaFLOPS, etc.) are similar, except that they weight floating-point instructions heavily to represent computation-intensive applications, such as continuous simulations and finite element analysis. The problem with the OPS metric is that the resulting number is heavily dependent upon the instruction mix that is used to compute it, which may not accurately represent the intended application’s instruction execution frequency.

CHAPTER TEN

Other Useful Stuff

Benchmarks

Benchmarks are short, self-contained programs that perform a critical part of an application, such as a sorting algorithm, and are used to compare functionally equivalent code on different machines. The programs are run for some number of iterations, and the time is measured and compared with that of other CPUs. The weakness here is that the benchmark is not only a measure of the processor, but also of the programmer and the tools used to implement the program. As a result, the best benchmark is the one you write yourself, since it allows you to discover how efficiently the code you write will execute on a given processor with the tools available. That’s as close to the real application performance as you’re likely to get, short of fully implementing the application on each processor under evaluation.

Device Selection Process

In selecting a device from a field of several candidates, there is more to be considered than just the speed of the processor. Some factors, such as the availability of secondary suppliers, may be an absolute requirement in some applications. In order to make a systematic evaluation and selection of the best alternative, the following method has proved valuable, particularly when the selection process must be documented and justified. The process consists of three major steps: eliminating the alternatives that are completely inappropriate, ranking the remaining options, and evaluating the adverse consequences of a catastrophic event.

The three decision matrices are:

1) Pass/fail criteria for elimination of non-conforming alternatives.

2) Weighted scoring of parametric values to rank options.

3) Consideration of adverse consequences, including their probability and severity.

The first matrix consists of a table with all the options on one axis and all the “must have” criteria on the other axis. Each criterion is checked off for each option. The second matrix consists of the surviving options from the first matrix on one axis of a table, and a list of quantitative measures on the other axis, along with a weighting factor for each measure, indicating its relative importance. Each option receives a weighted score allowing them to be ranked. Finally, each of the top-ranking options is evaluated with respect to probability of occurrence. For instance, a dual-source part that both manufacturers produce in Silicon Valley could become totally unavailable from either source in the event of a major earthquake in that region. In that case, even though the probability of occurrence is very low, the consequences are very severe; production could be interrupted for a very long time from both sources simultaneously, causing the product they’re designed into to stop shipping for an indefinite period of time.