
Optimizing Algorithms

From Amdahl's law it follows naturally that the parallelizable part can be executed faster by throwing hardware at it: more threads and more CPUs. The non-parallelizable part, however, can only be executed faster by optimizing the code. Thus, you can increase the speed and parallelizability of your program by optimizing the non-parallelizable part. You might even change the algorithm to have a smaller non-parallelizable part overall, by moving some of the work into the parallelizable part (if possible).
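To make this concrete, below is a minimal sketch in Python (the lab text itself specifies no language, so the choice is an assumption) that tabulates the basic form of Amdahl's law, T(N) = B + (1 - B) / N, where B is the non-parallelizable fraction and the total time on one CPU is normalized to 1:

def amdahl_time(B, N):
    """Execution time T(N) = B + (1 - B) / N, with T(1) normalized to 1."""
    return B + (1 - B) / N

if __name__ == "__main__":
    B = 0.4  # 40% of the program cannot be parallelized
    for N in (1, 2, 5, 10, 100, 1000):
        # No matter how large N gets, T(N) never drops below B = 0.4.
        print(f"N = {N:5d}: T(N) = {amdahl_time(B, N):.4f}")

However many CPUs you add, T(N) never falls below B, which is exactly why optimizing the non-parallelizable part pays off.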

Optimizing the Sequential Part

If you optimize the sequential part of a program, you can also use Amdahl's law to calculate the execution time of the program after the optimization. If the non-parallelizable part B is optimized by a factor of O, Amdahl's law looks like this:

T(O,N) = B / O + (1 - B / O) / N

Remember, the non-parallelizable part of the program now takes B / O of the time, and the remaining 1 - B / O of the time is treated as parallelizable. This corresponds to the situation described above, where the work saved in the non-parallelizable part is moved into the parallelizable part, so the total time on a single CPU is still 1.

If B is 0.4, O is 2 and N is 5, then the calculation looks like this:

T(2,5) = 0.4 / 2 + (1 - 0.4 / 2) / 5

= 0.2 + (1 - 0.4 / 2) / 5

= 0.2 + (1 - 0.2) / 5

= 0.2 + 0.8 / 5

= 0.2 + 0.16

= 0.36
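The same arithmetic can be checked mechanically. Continuing the Python sketch above (the helper name amdahl_time_optimized is illustrative, not from the lab text):

def amdahl_time_optimized(B, O, N):
    """T(O,N) = B / O + (1 - B / O) / N, as defined above.

    B: non-parallelizable fraction of the program (total time on 1 CPU = 1),
    O: optimization factor applied to the non-parallelizable part,
    N: number of threads/CPUs executing the parallelizable part.
    """
    return B / O + (1 - B / O) / N

print(f"{amdahl_time_optimized(0.4, 2, 5):.2f}")  # prints 0.36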

Execution Time vs. Speedup

So far we have only used Amdahl's law to calculate the execution time of a program or algorithm after optimization or parallelization. We can also use Amdahl's law to calculate the speedup, meaning how much faster the new algorithm or program is than the old version.

If the time of the old version of the program or algorithm is T, then the speedup will be

Speedup = T / T(O,N)

We often set T to 1 just to calculate the execution time and speedup as a fraction of the old time. The equation then looks like this:

Speedup = 1 / T(O,N)

If we expand T(O,N) using Amdahl's law, we get the following formula:

Speedup = 1 / ( B / O + (1 - B / O) / N )

With B = 0.4, O = 2 and N = 5, the calculation becomes:

Speedup = 1 / ( 0.4 / 2 + (1 - 0.4 / 2) / 5)

= 1 / ( 0.2 + (1 - 0.4 / 2) / 5)

= 1 / ( 0.2 + (1 - 0.2) / 5 )

= 1 / ( 0.2 + 0.8 / 5 )

= 1 / ( 0.2 + 0.16 )

= 1 / 0.36

= 2.77777 ...

That means that if you optimize the non-parallelizable (sequential) part by a factor of 2, and parallelize the parallelizable part by a factor of 5, the new optimized version of the program or algorithm will run at most 2.77777 times faster than the old version.
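As a hedged sketch, the speedup formula translates directly into code (again, the helper name speedup is illustrative):

def speedup(B, O, N):
    """Speedup = 1 / T(O,N), with the time of the old version set to 1."""
    return 1.0 / (B / O + (1 - B / O) / N)

print(f"{speedup(0.4, 2, 5):.5f}")  # prints 2.77778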

Measure, Don't Just Calculate

While Amdahl's law lets you calculate the theoretical speedup from parallelizing an algorithm, don't rely too heavily on such calculations. In practice, many other factors come into play when you optimize or parallelize an algorithm.

The speed of memory, CPU caches, disks, network cards etc. (if the disk or network is used) may be a limiting factor too. If a new version of the algorithm is parallelized but causes a lot more CPU cache misses, you may not get the desired N-times speedup from using N CPUs. The same is true if you end up saturating the memory bus, the disk, or the network card or connection.

My recommendation is to use Amdahl's law to get an idea of where to optimize, but to use measurements to determine the real speedup of the optimization. Remember, sometimes a highly serialized sequential (single-CPU) algorithm may outperform a parallel algorithm, simply because the sequential version has no coordination overhead (breaking the work down and combining the results again), and because a single-CPU algorithm may conform better to how the underlying hardware works (CPU pipelines, CPU caches etc.).
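In that spirit, here is a minimal measurement sketch (the function names and workload sizes are assumptions chosen for illustration): it times the same CPU-bound work sequentially and with a process pool, then reports the measured speedup rather than the one Amdahl's law predicts.

import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    """A CPU-bound stand-in for the parallelizable part of a program."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 8  # eight independent pieces of work

    t0 = time.perf_counter()
    sequential = [busy_work(n) for n in chunks]
    t_seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ProcessPoolExecutor() as pool:  # one worker per CPU by default
        parallel = list(pool.map(busy_work, chunks))
    t_par = time.perf_counter() - t0

    assert sequential == parallel
    # Coordination overhead and memory effects usually keep this below N.
    print(f"sequential: {t_seq:.3f} s, parallel: {t_par:.3f} s, "
          f"measured speedup: {t_seq / t_par:.2f}x")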

Control questions:

  1. What is information technology?

  2. What do the terms “hardware” and “software” mean?

  3. Give a definition of the term “information”.

  4. In what forms can information exist?

  5. What is the role of a communication channel?

  6. List the properties of information.

  7. Name the units of information.

  8. How many bits are in 1 byte, 1 Kbyte, 1 Mbyte?


Laboratory work №2

COMPUTER ARCHITECTURE. TYPES OF MEMORY. NUMBER SYSTEMS.
