- •1.3. Describe the OpenMP programming model and compile programs with OpenMP.
- •1.4. Define the classification of computer architectures and of parallel computing systems.
- •1.5. Describe how to set the number of parallel threads in OpenMP.
- •1.8. Critically evaluate synchronization in OpenMP.
- •1.9. Describe the directive #pragma omp for.
- •1.11. Specify the loop-iteration scheduling modes across OpenMP threads.
- •1.12. Define Amdahl's Law and its meaning for the programmer.
- •1.13. Define the concepts of thread and process. Describe how a thread differs from a process.
- •1.14. Describe the directive #pragma omp parallel: its purpose and clause parameters.
- •1.15. Critically evaluate locks (mutexes) in OpenMP.
- •1.17. Critically evaluate the MPI programming model. Specify communicators and ranks in MPI.
- •1.18. Explain how computer performance is measured, in what units, according to what law it grows, and what the limits of growth are.
- •1.20. Describe the functions for creating and terminating pthreads threads.
- •2.1. Give an example of thread identification.
- •2.2. Prove the importance of protecting shared data with a mutex.
- •2.3. Specify the required elements of an MPI program.
- •2.4. Prove the importance of working with time in MPI. Describe the procedure for measuring the time of a portion of the program.
- •2.5. Specify how arguments are passed to a thread function.
- •2.6. Provide examples of basic thread control operations.
- •2.7. Prove the importance of dividing a problem's data between threads.
- •2.8. Prove the importance of using NVIDIA CUDA in parallel programs.
1.12. Define Amdahl's Law and its meaning for the programmer
Amdahl's Law governs the speedup obtained by running a problem on parallel processors versus a single serial processor. Before examining Amdahl's Law itself, we should understand what is meant by speedup.
The speed of a program is the time it takes the program to execute, which can be measured in any unit of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is: S = T(1)/T(n),
where T(j) is the time it takes to execute the program when using j processors. Efficiency is the speedup divided by the number of processors used; this is an important factor to consider.
Amdahl's Law then states that if a fraction p of a program can be parallelized, the speedup on n processors is at most S(n) = 1 / ((1 - p) + p/n). As n grows, S(n) approaches 1/(1 - p), so the serial fraction of the program places a hard limit on the achievable speedup.
Why are actual speedups always less than this ideal? Distributing work to the parallel processors and collecting the results back together is extra work required in the parallel version that is not required in the serial version.
Furthermore, while the program is executing the parallel parts of the code, it is unlikely that all of the processors will be computing all of the time: some of them will likely run out of work before the others have finished their share of the parallel work.
1.13. Define the concepts of thread and process. Describe how a thread differs from a process. Processes
Process: "an execution stream in the context of a particular process state." 1. Execution stream: a sequence of instructions (only one thing happens at a time). 2. Process state: everything that can affect, or be affected by, the process: code, data, call stack, open files, network connections, etc.
Is a process the same as a program? No: a program is a static set of instructions, while a process is a running instance of a program together with its state; several processes can be executing the same program.
Uniprogramming system: supports only a single process. This simplifies some parts of the OS, but makes many other things hard to do. 1. Some early personal-computer operating systems were uniprogramming (e.g. MS-DOS), but such systems are almost unheard of today. 2. This is not called "uniprocessing": that term refers to a system with only one processor.
Virtually all modern operating systems are multiprogramming systems: multiple processes can exist simultaneously and share the machine.
Threads
Most modern operating systems also support threads: multiple execution streams within a single process. 1. Threads share process state such as memory, open files, etc. 2. Each thread has a separate stack for procedure calls (in shared memory). 3. A thread is the unit of sequential execution.
Why support threads? 1. Concurrent execution on multiprocessors. 2. More efficient I/O management: some threads can wait for I/O while others compute. 3. The most common use of threads: large server applications.
1.14. Describe the directive #pragma omp parallel. Purpose: the omp parallel directive explicitly instructs the compiler to parallelize the chosen block of code.
#pragma omp parallel
{
    #pragma omp for
    for (int i = 1; i < 100; ++i) {
        ...
    }
}
