
Amdahl's Law

Amdahl's Law describes the speedup gained by using parallel processors on a problem, versus using only one serial processor. Before we examine Amdahl's Law, we should gain a better understanding of what is meant by speedup.

Speedup:

The speed of a program is measured by the time it takes the program to execute; this could be measured in any increment of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:

S = T(1) / T(j)

where T(j) is the time it takes to execute the program when using j processors. Efficiency is the speedup divided by the number of processors used. This is an important factor to consider: given the cost of multiprocessor supercomputers, a company wants to get the most bang for its buck.
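These two definitions are easy to express directly. Below is a minimal sketch in Python (the function names and the sample timings are hypothetical, chosen purely for illustration):

def speedup(t_serial, t_parallel):
    # Speedup: serial execution time divided by parallel execution time.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_processors):
    # Efficiency: speedup divided by the number of processors used.
    return speedup(t_serial, t_parallel) / n_processors

# Hypothetical timings: 100 s on one processor, 25 s on 8 processors.
print(speedup(100.0, 25.0))        # 4.0  -> four-fold speedup
print(efficiency(100.0, 25.0, 8))  # 0.5  -> each processor is only half used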

To explore speedup further, we shall do a bit of analysis. If there are N workers working on a project, we may assume that they would be able to do the job in 1/N the time of one worker working alone. Now, if we assume the strictly serial part of the program is performed in B*T(1) time, then the strictly parallel part is performed in ((1-B)*T(1)) / N time, so the total parallel time is T(N) = B*T(1) + ((1-B)*T(1)) / N. Dividing T(1) by T(N) and cancelling T(1) gives the formula for speedup:

S = N / ((B*N) + (1-B))

where N = the number of processors and B = the fraction of the algorithm that is strictly serial.

This formula is known as Amdahl's Law. The following is a quote from Gene Amdahl in 1967:

For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit co-operative solution...The nature of this overhead (in parallelism) appears to be sequential so that it is unlikely to be amenable to parallel processing techniques. Overhead alone would then place an upper limit on throughput of five to seven times the sequential processing rate, even if the housekeeping were done in a separate processor...At any point in time it is difficult to foresee how the previous bottlenecks in a sequential computer will be effectively overcome.
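To see the force of this argument, take a worked example with the formula above (the numbers are illustrative, not from Amdahl's paper): suppose B = 0.1, i.e. 10% of the algorithm is strictly serial. With N = 10 processors,

S = 10 / ((0.1*10) + (1-0.1)) = 10 / 1.9 ≈ 5.26

so ten processors yield barely more than a five-fold speedup. And as N grows without bound, S approaches 1/B = 10: no matter how many processors are thrown at the problem, the speedup can never exceed tenfold.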

Speedup Curves:

Now that we have defined speedup and efficiency, let us use this information to make sense of Amdahl's Law. We will refer to a speedup curve to do this. A speedup curve is simply a graph with the number of processors on the X-axis plotted against the speedup on the Y-axis. The best speedup we could hope for, S = N, would yield a 45-degree line: if there were ten processors, we would realize a ten-fold speedup. A speedup below 1, on the other hand, would mean that the program ran faster on a single processor than in parallel, which would make it a poor candidate for parallel computing. When B is held constant (recall that B is the fraction of the program that is strictly serial), Amdahl's Law yields a speedup curve that flattens out toward an asymptote of 1/B and remains below the line S = N. This law shows that it is indeed the algorithm, and not the number of processors, which limits the speedup. Also note that as the curve begins to flatten out, efficiency drops drastically.
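A quick way to see this flattening without plotting is to tabulate the curve. The short sketch below assumes a serial fraction of 5% (an arbitrary value chosen for illustration) and prints speedup and efficiency for a doubling number of processors:

def amdahl_speedup(n, b):
    # Amdahl's Law: speedup with n processors when fraction b is serial.
    return n / ((b * n) + (1 - b))

b = 0.05  # assumed serial fraction
for n in (1, 2, 4, 8, 16, 32, 64, 128):
    s = amdahl_speedup(n, b)
    # processors, speedup, efficiency (speedup per processor)
    print(n, round(s, 2), round(s / n, 2))

With b = 0.05 the speedup climbs quickly at first (about 3.48 on 4 processors) but crawls toward its ceiling of 1/b = 20 (about 17.41 on 128 processors), while efficiency falls from 1.0 to roughly 0.14.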

Amdahl's Law can be used to calculate how much a computation can be sped up by running part of it in parallel. The law is named after Gene Amdahl, who presented it in 1967. Most developers working with parallel or concurrent systems have an intuitive feel for potential speedup even without knowing Amdahl's Law; regardless, it is still useful to know.

I will first explain Amdahl's Law mathematically, and then proceed to illustrate it using diagrams.
